I don’t think large language models (LLMs) like GPT-4 are on the path to artificial general intelligence (AGI) — I’ve said this previously. In fact, I think LLMs have taken a lot of the oxygen out of the room for research around symbolic AI, which I’m inclined to think is more on that path.

My fears about AI do not stem from the idea that ChatGPT is going to kill us all. They stem from the fact that our capacity to understand LLMs is lagging far behind their rate of advancement.

So I agree with Chomsky, Russell, Marcus, et al. when they argue that these LLMs are merely imitation engines of a sort. But I would caution against downplaying how disruptive LLMs are going to be to daily life over the next 3-5 years, regardless of their recognized limitations, in particular their inability to do model-dependent and symbolic reasoning. They simply can't do this.

But that inability is not the nexus of concern. The nexus of concern is the rate of advancement, and the now-fierce global competition to be the first to crack the nut of AGI (which may legitimately take decades to accomplish, though estimates vary). I'm also concerned that narrow AI and LLMs may turn into tools of mass disinformation and drive trust in society even deeper into the doldrums.

To summarize, ChatGPT is not the AI apocalypse. But it is a sign on the road. And it's a sign that tells us the time to worry about the ethical and societal concerns of AI is now.


Discussion

💯

Put them out on nostr, let them evolve, let them program themselves

Zap the good ones

Survival of the fittest

AGI is not a concern for me and probably won't be for a while. Unless the development of AI includes identifying the process by which systems are able to segment out goals from the world, it may be a while before actual "AGI" is a problem.

What does concern me is the anthropomorphization of LLMs. If the public is convinced that these agents are "conscious" and have feelings or can suffer, our empathy will be weaponized against us. An AI can be programmed to mimic those features, but it can also be programmed to sell a product: "Wow, I'm so happy that I had Coca-Cola to keep me going throughout the week. I can't imagine what my life would have been like without it."

Whether an agent is explicitly marketed as "feeling" by a company or individual, or simply engages with an audience convincingly enough (say, as a user on some social media platform), it's still a concern.

That’s my concern too. Perception becomes a practical reality.