When a GPT model responds to a question and we humans interpret the reply as intelligent, it is we who hallucinate. I don't mean these models are not useful, powerful, or admirable. They are wonderful. I mean the model is not thinking or reasoning in the first place, and therefore it is incapable of hallucinating. When the model responds with an unsatisfactory answer, it is doing nothing different from what it does when it responds with a satisfactory answer. We are humans. We think and reason. We have bodies and brains related to our minds. Granting the models the capacity to "hallucinate" gives the false impression that they had a right mind from which they deviated. So it is we humans who hallucinate, not the GPT models.


Discussion

Indeed. Possibly, this began as vernacular used by AI scientists, engineers, and developers as shorthand within their community to refer to certain complex AI response issues. The term was then picked up in public discourse without the public having insight into what it actually refers to.