Human intelligence is a jerry-rigged mess of kludges and poorly optimised heuristics. What we experience as consciousness is a narrative constructed by our frontal lobes to create a sense of predictability and continuity out of the mess of inputs from more specialised and less logical brain areas.

I think the best LLMs are already more intelligent than 25% of the Western workforce.

Discussion

See now we're getting into what we mean by intelligence. If you simply mean "smarts" then AI will surpass human intelligence very quickly in specific areas. Humans are generalists though, and I think we're less likely to see generalist AI.

If by intelligence you mean something more like "consciousness," that gets pretty quickly into scientific and philosophical grounds. Given that science doesn't have a good model of consciousness, I am highly skeptical we'll ever be able to truly replicate it.

I probably have a stronger view of human consciousness than many, though, because I don't think we're dealing with an exclusively material environment.

I think our positions on this are pretty close.

We don't have a good scientific theory of consciousness, mostly because such theories are so hard to falsify.

We know what it isn't: unitary and logical à la Descartes and Plato.

We do know that much processing is not accessible to introspection, that it is surprisingly modular, and at least somewhat hardware-dependent.

"True" creativity and the like lack rigorous definition, so we'll never know when/if those goalposts have been reached.

Lesser things do not give rise to greater things, so it stands to reason that humans cannot give rise to an intelligence greater than their own.

What do we mean by "greater" then? I think we have to take human consciousness as a whole, there. As you said, we don't have any good model of consciousness.

We do, however, have a pretty good grasp on some parts of *intelligence*. We could, for a moment, think of the mind as a collection of modules of intelligence, and consciousness somehow unites and organizes those modules. In the domain of any one of those modules, I think we can build an AI that can surpass us. Board games are a good example. Computers have famously beaten humans at both chess and go. Those computer models, however, don't have the same generalized capabilities as their human competitors.

The adoption pattern of ChatGPT and other LLMs proves this point, I think. We are already beginning to offload specific tasks onto them, but pointing them in a direction and organizing their results requires a human overseer.

LLMs have reason but no will.

I agree, except on two points.

Lesser things do give rise to greater things. "Emergence".

The ant hill is greater than any individual ant, or even the sum of individual ants.

The ocean waves are greater than the sum of particle motions in the air and water.

And "Will" is quite complex, even though we experience it as monolithic.

I thought about emergence before I wrote my last post; I was hoping we could talk about it.

Emergent phenomena are a good counter-example, but I think that conflates two ideas. Something can be greater in organization or complexity, or something can be greater in substance or kind. The ant hill is an emergent phenomenon that is of greater complexity than the sum of the individual ants, but that system is a composite, rather than a distinct nature.

So I'll refine my statement by saying this: Things of greater nature do not arise from things of lesser nature. A bunch of ants can organize themselves into an ant colony and build an emergent system of great complexity, but the ant hill is not a distinct animal or being in its own right; it is a composite of many beings operating together according to a set of rules. The ants, in organizing themselves, never transcend their ant nature.

Likewise, the molecules of water can be organized into waves that emerge from all the motions of the air and water molecules together, but they do not give rise to, say, a living being.

If we apply that same principle to AI, we could say that a multitude of artificial systems working together could create an emergent system of great complexity, but that doesn't mean that emergent system is conscious. Of course, this assumes that we hold consciousness itself to be a nature rather than an emergent phenomenon. We might disagree there.

There's definitely more to talk about here, so perhaps we can dive deeper, but I want to hear your thoughts first.

Indeed.

If we interpret "nature" as φύσις, then many senses of the word are clearly metaphysical.

One cannot marshal empirical arguments to take and hold that ground, any more than one could ask infantry to dig foxholes below the high tide mark. :-p

The best I can do is an analogy that we may all agree on.

A Chinese counterfeit guitar cannot become a Fender. No matter the build quality, no matter if it matches the Fender on every observable quality, it will always remain a counterfeit.

Very good analogy 🫡

The question in the case of AI is whether that "Fenderness" matters for practical purposes.

I agree with certain elements of this statement. When you wonder whether AI will surpass humans strictly in terms of "smarts," as you mentioned, yes, it will exceed humanity very easily. But I can't fully agree with your notion that we'll most likely not see a generalist AI. I believe it eventually will, but through different means of achieving consensus. Humans tend to generalize according to whatever degree of emotional impact different kinds of experiences have on them. This is why, when humans generalize, they are often wrong about many things; much of the time it lacks logic. Not always, but often. Given an AI's lack of emotion, there seems to be at least a small possibility of it generating a consensus through facts alone, including statistical facts produced by observing emotional responses.

If I can pull out my college philosophy for a moment, the soul is classically understood to have three parts: the passions, the will, and the intellect. You can subdivide those parts, but that's not important at the moment.

An AI that can factor in logic and emotion to generate consensus can imitate the passions and the reason, but where's the will? AI does what we tell it to do. If we built one that could decide its goals for itself, we'd be talking about something human-like, but I think that's a long way off, if it's achievable at all.

Even programming an AI to be human-like doesn't necessarily solve the problem. We still gave it a directive (act like a human), and it can't will to do something else.

Intentionality ("Will") as experienced by humans is modular, complex, and is mostly illusion.

https://sci-hub.se/10.1017/S0140525X00028636

Intentionality in bots is typically implemented very simply: many game and economics simulation libraries include intentionality, usually as decision trees, sometimes with complex weights.
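
For concreteness, here is a minimal sketch of the kind of weighted goal-selection logic such simulations often use. The names (`Goal`, `choose_goal`, the feature labels) are hypothetical and don't belong to any particular library; this is an illustration of the technique, not anyone's API.

```python
# Hypothetical sketch of weighted "intentionality" for a simulation bot:
# each candidate goal is scored as a weighted sum over features of the
# current world state, and the bot commits to the highest-scoring goal.

from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    weights: dict[str, float]  # feature name -> importance weight

def choose_goal(world_state: dict[str, float], goals: list[Goal]) -> Goal:
    """Pick the goal whose weighted feature score is highest right now."""
    def score(goal: Goal) -> float:
        return sum(w * world_state.get(feature, 0.0)
                   for feature, w in goal.weights.items())
    return max(goals, key=score)

if __name__ == "__main__":
    goals = [
        Goal("gather_food", {"hunger": 2.0, "food_nearby": 1.0}),
        Goal("flee",        {"threat_level": 3.0}),
        Goal("explore",     {"curiosity": 0.5, "map_unknown": 1.0}),
    ]
    state = {"hunger": 0.8, "food_nearby": 0.2, "threat_level": 0.1,
             "curiosity": 0.6, "map_unknown": 0.9}
    print(choose_goal(state, goals).name)  # prints "gather_food"
```

More elaborate versions replace the flat scoring with decision trees or utility curves, but the structure is the same: intentions are just the output of a scoring rule over the world state.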

I'm not aware of any LLMs designed to have complex human-like synthesis of intentionality, but there is no theoretical or practical barrier. I guess researchers just don't want to creep out the normies!

(And there's no market for capricious, willful AIs, we have AWFLs for that).

Has human-like intentionality in artificial systems been attempted?

I would argue that at least some part of the human will is a distinct nature, and not just an illusory emergent phenomenon. We can imitate much of it with complex decision trees, but I suspect there is some piece that is wholly irreducible.

In the realm of reason, we might call the irreducible component "insight," which is the process by which utterly new ideas come out of nowhere. I'm not sure we'll be able to give machines that same spark of insight.

Insight itself is largely illusion - some semi-autonomous modules of the human mind are capable of quite sophisticated goal-directed planning and behaviour even when consciousness is directing attention elsewhere (or even asleep!).

The output - the insight - is experienced as coming "out of nowhere" by the consciously accessible parts of the human mind. But this is not irreducibly different in kind from a computer CPU experiencing an interrupt from a co-processor with a calculation to deliver.
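
To make the analogy concrete, here is a toy sketch (an illustration only, not a model of cognition): a background thread grinds on a problem while the main thread attends to something else, then delivers its result through a callback, roughly the way a co-processor raises an interrupt with a finished calculation. All names here are invented for the example.

```python
# Illustrative analogy only: a "background module" works on a task out of the
# main thread's awareness, then delivers the finished result via a callback,
# which the main thread experiences as arriving "out of nowhere".

import threading
import time

def background_module(problem: list[int], deliver) -> None:
    """Grind away on a task while attention is elsewhere."""
    time.sleep(0.5)            # simulate slow, inaccessible processing
    deliver(sum(problem))      # the "insight" arrives via the callback

def on_insight(result: int) -> None:
    print(f"Insight delivered out of nowhere: {result}")

worker = threading.Thread(target=background_module,
                          args=([1, 2, 3, 4], on_insight))
worker.start()

print("Main thread attends to something else...")
worker.join()
```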

Cognitive psychology is an Alice-in-Wonderland rabbit hole sometimes! I blame Sci-Hub for making the research so accessible :-p