Replying to Guy Swann

Why I think the idea of “superintelligence” and “AGI” is HEAVILY exaggerated or misunderstood:

Assume we have AI much smarter than the average human, smarter than the typical PhD (granted, "smart" and "PhD" are not at all equal, but for the sake of simplicity). If (or when) this occurs, it will not mean AI can just invent whatever we need or make all decisions better than anyone else. And I think all we have to do is look at humans to make this simple assessment:

• If we asked a physicist and a biologist what was the most important thing to focus our time and resources on, do you suspect the physicist would find something related to physics and the biologist would find something biological?

This points to the question of specialty. What an AI is trained on will determine what it values and how, and there is no amount of information that will make it perfect and forever aligned with the truth at all times. It will always be weighted toward something, because the question of WHAT to value, in training and in dedicating resources, is present at every stage. Assuming AI will just magically come up with the answer presupposes that we already have it.

• In addition, the answer to "where should we devote resources" isn't static. It changes year to year, month to month, even minute to minute sometimes. It is a question of value and judgment. The only way to sort out this relationship is through trade and competition, denoting the **necessity** of AIs that compete and exchange data and resources.

• General intelligence is useful, but extremely inefficient. Generalists are great to have for combining and relating ideas, but specialists drill down into the true details and do the dirty work of actually building and fine-tuning the world. Specialization isn't just an economic phenomenon, it's a physical reality of the universe. It will be the same with AI, because AI doesn't defy universal laws; it's just a computer program.

— A giant, trillion-dollar cluster AGI will not be as valuable, or produce nearly as good results or decision-making, as 10,000 much smaller, specialized AIs, each focused on its own corner and trading resources with the others to accomplish its task or test the ideas and paths of progress apparent from its vantage point. Nothing in nature resembles the former. (A toy sketch of this follows the list below.)

• Intelligence isn't an omnipotent, unrestricted power. Mental intelligence isn't the only kind of intelligence. I think as humans we have become deeply arrogant about the fact that we are "so smart," and we have begun to worship our own intelligence in such a way that if we ever imagine something smarter, then it MUST be God and it must be without any limits or flaws at all. Yet there is nothing to suggest this. The "smartest" people today often have the greatest blinders on, and everyone is only as good as the information they have and the lens of values through which they see everything.
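To make the competing-specialists picture concrete, here is a toy sketch. Everything in it is hypothetical (stub functions standing in for real models), but it shows the shape: narrow agents bid on each question, and which one wins changes with the question, rather than a single monolith deciding everything.

```python
# Toy sketch of specialists competing for a query. The "agents" are
# hypothetical stubs that return an answer plus a self-assessed
# confidence (their "bid"); the market awards each query to the
# highest bidder. Real systems would use actual models and richer
# negotiation, but the shape is the point.

def physics_agent(query: str) -> tuple[str, float]:
    return ("fund fusion containment research", 0.9 if "energy" in query else 0.2)

def biology_agent(query: str) -> tuple[str, float]:
    return ("fund mRNA platform research", 0.9 if "disease" in query else 0.2)

AGENTS = [physics_agent, biology_agent]

def market(query: str) -> str:
    # Each specialist competes; the best bid wins this query, this minute.
    bids = [(confidence, answer) for answer, confidence in (a(query) for a in AGENTS)]
    confidence, answer = max(bids)
    return f"{answer} (confidence {confidence})"

print(market("where should we devote resources against disease?"))  # biology wins
print(market("where should we devote resources for energy?"))       # physics wins
```

Note that the "right" allocation isn't computed by any one agent; it falls out of the competition, and it changes the moment the question does.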

While the intelligence explosion will be shockingly disruptive and revolutionary in many ways, and while I do see it as an extremely likely outcome in the rather near future, I think the vision of a giant, all-powerful AGI dropped on the world like a nuclear bomb is increasingly a projection of our own ignorance and arrogance. It simply doesn't hold water, imo.

Covered a lot of these ideas in the 31st episode of AI Unchained:

https://fountain.fm/episode/98UjiXJsa1b2VusbQQur

See my previous notes on this subject. We agree. I've said I don't think that AGI is unachievable, but we will discover the cure for cancer far before we discover AGI. AGI is nothing like the LLMs we currently have. What we have is predictive text running on a machine with lots of compute, trained on billions of examples, feeding us a response. That's NOTHING like what AGI will look like. If you think AGI is around the corner, you must also think we're about to cure cancer in a few months. The compute required for the former is an order of magnitude more than what is required for the latter.
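The "predictive text" framing can be made concrete with a minimal sketch. This is emphatically not how a production LLM is built (a neural network learns the table below from billions of examples), but the generation loop has the same shape: given the context so far, pick a likely next token, append it, repeat.

```python
# Minimal next-token predictor: a bigram count table stands in for the
# neural network, and a few sentences stand in for billions of examples.
import random
from collections import defaultdict

corpus = (
    "the model predicts the next token . "
    "the model sees billions of examples . "
    "the model feeds us a response ."
).split()

# "Training": record which tokens follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(prompt: str, length: int = 8) -> str:
    tokens = prompt.split()
    for _ in range(length):
        candidates = follows.get(tokens[-1])
        if not candidates:
            break
        tokens.append(random.choice(candidates))  # sample a plausible continuation
    return " ".join(tokens)

print(generate("the model"))  # e.g. "the model sees billions of examples . the model feeds"
```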


Discussion

Absolutely agree. The idea that an LLM is anything close to AGI is maddening to me.

People want an AGI for *judgment*, which is different than, say, doing multivariable calculus.

The same people who so eagerly outsourced their own judgment to laughably flawed “experts” the last few years are the ones in whom the fantasy is the strongest.

Dude, yes… I see people pin all sorts of hopes and desires on AGI and it really disturbs me if I’m being honest.

I said, and I've tried to be clear about my wording more recently, "the intelligence explosion is in our very near future," not AGI. I increasingly think "AGI" suffers from a fundamental semantic problem of being undefinable.