Lesser things do not give rise to greater things, so it stands to reason that humans cannot give rise to an intelligence greater than their own.

What do we mean by "greater," then? I think we have to take human consciousness as a whole here. As you said, we don't have any good model of consciousness.

We do, however, have a pretty good grasp on some parts of *intelligence*. We could, for a moment, think of the mind as a collection of modules of intelligence, with consciousness somehow uniting and organizing those modules. In the domain of any one of those modules, I think we can build an AI that surpasses us. Board games are a good example: computers have famously beaten humans at both chess and Go. Those computer models, however, don't have the same generalized capabilities as their human competitors.

The adoption pattern of ChatGPT and other LLMs proves this point, I think. We are already beginning to offload specific tasks onto them, but pointing them in a direction and organizing their results still requires a human overseer.

LLMs have reason but no will.

Discussion

I agree, except on two points.

Lesser things do give rise to greater things. "Emergence".

The ant hill is greater than any individual ant, or even the sum of individual ants.

Ocean waves are greater than the sum of the particle motions in the air and water.

And "Will" is quite complex, even though we experience it as monolithic.

I thought about emergence before I wrote my last post; I was hoping we could talk about it.

Emergent phenomena are a good counter-example, but I think the objection conflates two ideas. Something can be greater in organization or complexity, or it can be greater in substance or kind. The ant hill is an emergent phenomenon of greater complexity than the sum of the individual ants, but that system is a composite rather than a distinct nature.

So I'll refine my statement by saying this: Things of greater nature do not arise from things of lesser nature. A bunch of ants can organize themselves into an ant colony and build an emergent system of great complexity, but the ant hill is not a distinct animal or being in its own right; it is a composite of many beings operating together according to a set of rules. The ants, in organizing themselves, never transcend their ant nature.

Likewise, water molecules can be organized into waves that emerge from the combined motions of air and water, but they do not give rise to, say, a living being.

If we apply that same principle to AI, we could say that a multitude of artificial systems working together might create an emergent system of great complexity, but that doesn't mean the emergent system is conscious. Of course, this assumes that we hold consciousness itself to be a nature rather than an emergent phenomenon. We might disagree there.

There's definitely more to talk about here, so perhaps we can dive deeper, but I want to hear your thoughts first.

Indeed.

If we interpret "nature" as φύσις, then many senses of the word are clearly metaphysical.

One cannot marshal empirical arguments to take and hold that ground, any more than one could ask infantry to dig foxholes below the high tide mark. :-p

The best I can do is an analogy that we may all agree on.

A Chinese counterfeit guitar cannot become a Fender. No matter the build quality, even if it matches a genuine Fender in every observable respect, it will always remain a counterfeit.

Very good analogy 🫡

The question in the case of AI is whether that "Fenderness" matters for practical purposes.