Is anybody thinking about AIs hallucinating as a feature and not a bug?
Seems like if you were going to look for actual creativity, that is the direction you would want to explore.
In fact, how do you know that it is the AI that is hallucinating and not you?
You don't know, and that is an important feature of our universe. There is no final arbiter of truth. There is no one to ask, and no process you can run, by which you can be absolutely sure that you are right and the AI is wrong.
You can be pretty fucking sure. I'm pretty fucking sure that this 767 I'm about to walk into isn't going to fall out of the sky. I'm sure enough that I am willing to risk my life on it.
But you can never be absolutely sure. Physics doesn't allow it.
So how do you eliminate "hallucinations" in an AI? Well, you can't completely. I suppose there are two approaches you could take to reduce them.
One way is to hard-code the correct answers into the AI. This method has its limits, as it's hard to imagine having enough man-hours available. It also limits the AI to knowing, at most, everything humans already know.
The other method is to teach the AI to create knowledge the same way humans do: through conjecture and criticism.
It seems that only in this way can our AIs surpass us.
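
If you want that last idea a bit more concrete, here's a toy sketch of what I mean by conjecture and criticism. Every name in it (`conjecture`, `criticise`, `answer_question`) is made up for illustration; a real system would use a model to generate guesses and real tests, proofs, or retrieval to criticise them.

```python
import random

# Toy conjecture-and-criticism loop. Purely illustrative: every function
# here is hypothetical, nothing is a real library API.

def conjecture(question: str) -> str:
    """Propose a candidate answer. A real system would sample from a model."""
    return random.choice(["17", "42", "256"])

def criticise(question: str, answer: str) -> bool:
    """Try to refute the candidate. Here the criticism is a check we can
    actually run (arithmetic); real criticism could be unit tests, proofs,
    retrieval, or another model hunting for errors."""
    return answer == str(6 * 7)

def answer_question(question: str, max_rounds: int = 20) -> str | None:
    """Keep conjecturing until a guess survives criticism, or give up."""
    for _ in range(max_rounds):
        guess = conjecture(question)
        if criticise(question, guess):
            return guess  # survived criticism, so keep it
    return None  # nothing survived: admit "I don't know" instead of guessing

print(answer_question("What is 6 x 7?"))  # usually prints 42
```

The whole point is the last return: when no conjecture survives criticism, the honest output is "I don't know", which is exactly the step a hallucinating model skips.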
#AI #AGI #physics #philosophy
