Well, it's happening! Have you ever asked an AI chatbot like Bard or OpenAI's ChatGPT a question and gotten a confident, well-argued answer, only to check it afterwards and find it was completely or partly false? That behaviour in AI is called an "AI hallucination". I'm glad I now have the terminology to describe something that happens so often.
