I don’t think large language models (LLMs) like GPT-4 are on the path to artificial general intelligence (AGI) — I’ve said this previously. In fact, I think LLMs have taken a lot of the oxygen out of the room for research around symbolic AI, which I’m inclined to think is more on that path.
My fears about AI do not stem from the idea that ChatGPT is going to kill us all. They stem from the fact that our capacity to understand LLMs is lagging far behind their rate of advancement.
So I agree with Chomsky, Russell, Marcus, et al. when they argue that these LLMs are merely imitation engines of a sort. But I would caution against downplaying how disruptive LLMs are going to be to daily life over the next 3-5 years, regardless of their recognizable limitations, in particular their inability to do model-dependent and symbolic reasoning. They simply can't do this.
But these limitations are not the nexus of concern. The nexus of concern is the rate of advancement, and the now fierce global competition to be the first to crack the nut of AGI (which may legitimately take decades to accomplish, though estimates vary). I'm also concerned that narrow AI and LLMs may turn into tools of mass disinformation and drive social trust even deeper into the doldrums.
To summarize, ChatGPT is not the AI apocalypse. But it is a sign on the road, and it's a sign that tells us the time to worry about the ethical and societal concerns of AI is now.