Anyone else notice how easy it is to confuse ChatGPT into giving incorrect answers?

It seems to contextualize answers based on the conversation, building on each answer as if it were true, and eventually includes incorrect things based on the 'temporary absolutes' it created earlier in the conversation string.

But if you ask the same question out of context in a different conversation, it will give the right answer.
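
A minimal sketch of what's probably going on, using the openai Python client; the model name and the prompts here are purely illustrative, not anything from the original conversations. The same question is asked once inside a conversation that already contains a wrong 'temporary absolute', and once in a fresh conversation with no history:

```python
# Hypothetical sketch using the openai Python client (pip install openai).
# Model name and example prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = {"role": "user", "content": "How long does Mars take to orbit the Sun?"}

# 1) Ask inside a conversation that already contains a wrong claim the
#    model went along with -- a 'temporary absolute' in the thread.
poisoned_history = [
    {"role": "user", "content": "Mars is closer to the Sun than Earth, right?"},
    {"role": "assistant", "content": "Yes, Mars orbits closer to the Sun than Earth."},
    question,
]
in_context = client.chat.completions.create(
    model="gpt-4o-mini", messages=poisoned_history
)

# 2) Ask the same question in a fresh conversation with no prior history.
fresh = client.chat.completions.create(model="gpt-4o-mini", messages=[question])

print("with poisoned context:", in_context.choices[0].message.content)
print("fresh conversation:   ", fresh.choices[0].message.content)
```

The point of the sketch: the second call has no way to be influenced by the earlier wrong exchange, because the model only ever sees what is in the `messages` list it is given.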

It's interesting.


Discussion

I can't get it to admit communism is wrong

You're right. It won't 😆

Even when asked to view the question from a paradigm of freedom, it winds its way around without ever answering.

Yes, AI mimics natural language, seeking coherence rather than logical consistency. More than this, when it makes a mistake it tends to equivocate, at least at first, and tries to convince us that what we know to be wrong is actually right. AI is a kind of sophisticated sophist: almost human, all too human.

But even this argument attributes intent, which the software lacks.

It likely has to do with the tokenization process: the whole conversation, including the model's own earlier answers, is fed back in as one flat token sequence, so prior mistakes become part of the context that conditions the next reply.
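
A minimal sketch of that idea, assuming the tiktoken library and the "gpt-4" encoding (both assumptions, just to make the point concrete): every turn of the conversation is serialized into a single token sequence before the next completion is generated, wrong answers included.

```python
# Minimal sketch (assuming tiktoken is installed: pip install tiktoken).
# It shows that the whole conversation, including the model's earlier
# mistaken answer, becomes one flat token sequence that conditions the
# next reply. The serialization format below is a simplification.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

conversation = [
    ("user", "Is the Great Wall of China visible from the Moon?"),
    ("assistant", "Yes, it is one of the few man-made objects visible."),  # a mistake
    ("user", "So what else can astronauts see from the Moon?"),
]

# Roughly how chat history is flattened before generation: every prior
# turn, right or wrong, is just more tokens in the prompt.
prompt = "\n".join(f"{role}: {text}" for role, text in conversation)
tokens = enc.encode(prompt)

print(f"{len(tokens)} tokens of context, the earlier mistake among them")
```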

OK, intentionality is a complicated concept. Let's put it this way: AI does not 'seek' but 'tends to' produce dynamic, relational, and context-dependent meaning, that is, coherence rather than mere logical consistency.