ChatGPT does not truly reason; it operates as a sophisticated mathematical framework that identifies patterns in vectorized text, trained on a vast corpus of human-written material. Its responses are the result of probabilistic computations that connect ideas at incredible speed. The illusion of "human-like" conversation is, in essence, an elegant sleight of hand: a parlor trick born of complex algorithms and data.
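To make the "probabilistic computations" concrete, here is a toy sketch of next-token sampling: scores over a vocabulary are turned into a probability distribution and one token is drawn from it. This is an illustration of the general idea, not the implementation of any particular model; the vocabulary and scores are made up.

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0):
    # The next token is drawn from the distribution; no explicit
    # reasoning step is involved, just weighted random selection.
    probs = softmax([l / temperature for l in logits])
    return random.choices(vocab, weights=probs, k=1)[0]

# Toy example with a made-up four-word vocabulary.
vocab = ["cat", "dog", "the", "ran"]
logits = [2.1, 1.9, 0.3, -0.5]
print(sample_next_token(vocab, logits))
```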
Discussion
Right. It’s akin to System 1 heuristics.
Soon will come System 2 logic chains and solution-space searching (rough sketch of the idea below)
Just not yet
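By "solution-space searching" I mean something like the classic search pattern below: keep a frontier of partial solutions, expand the most promising one, and stop at a goal. This is only a generic best-first search over a toy graph; the `score` and `expand` functions are hypothetical stand-ins for whatever a future model might use to judge and extend partial solutions.

```python
import heapq

def score(path):
    # Hypothetical scorer standing in for a model's judgment of a partial
    # solution; here it simply favors paths with more distinct steps.
    return -len(set(path))

def best_first_search(start, expand, is_goal, max_steps=1000):
    # Explore the solution space by always expanding the most promising path.
    frontier = [(score([start]), [start])]
    for _ in range(max_steps):
        if not frontier:
            return None
        _, path = heapq.heappop(frontier)
        if is_goal(path):
            return path
        for nxt in expand(path[-1]):
            new_path = path + [nxt]
            heapq.heappush(frontier, (score(new_path), new_path))
    return None

# Toy problem: reach "D" from "A" through a small graph of moves.
moves = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(best_first_search("A", lambda s: moves[s], lambda p: p[-1] == "D"))
```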
Agreed. Some people seem to lose sight of this.
Also, I think the model should refrain from making statements about moral obligations, but that's more wishful thinking than a realistic expectation.