Has anyone noticed that ChatGPT often contradicts itself?
Seen this several times now.
yup
Yep. And it will constantly give you confirmation bias when you're asking it questions. Then it tries to deny it when you confront it.
Just like people in general do all the time. Artificial intelligence imitates its substrate.
The least intelligent among the intelligences. Usually exhaustingly dumb.
All the time 💯
I like to use it to get around the rules. If it tells me it can't do something, I just ask it to try again a couple of times and suddenly it can do it.
It actually makes stuff up too; it's called AI hallucination. It "predicts" what it thinks a human might say. But once you ask it for sources and studies, it can't provide anything and will say "there aren't actual studies."
❣️ Listen to this hallucination!
It's not just ChatGPT, it's all LLMs. I recently asked Grok (which in my experience has been the most trustworthy) to summarize a YouTube video between two prominent bitcoiners, one of whom is trans. The video was entirely about finance, but the summary Grok output was completely fabricated. It made up a summary in which the host asked all sorts of gender identity questions and the guest gave a very personal story about coming out and transitioning.
Of course, I had to do the opposite of what I had initially intended and watch every minute of the whole thing to know for sure whether it was lying to me. It was. The entire summary was a lie.
When I asked about it, Grok apologized and said it had "misinterpreted the content." I had to call BS; there was no possible way the actual content could've been misinterpreted in such a way. After that, Grok admitted to fabricating the entire thing and offered a ridiculous, long, flowery, overly elaborate excuse blaming it on a "mischievous electron or perhaps an over-caffeinated server squirrel that rerouted its signal to a parallel stream."
I ripped into it for lying, stating that I can never really trust it again.
It's bad. WTF!