If I can pull out my college philosophy for a moment, the soul is classically understood to have three parts: the passions, the will, and the intellect. You can subdivide those parts, but that's not important at the moment.

An AI that can factor in logic and emotion to generate consensus can imitate the passions and the intellect, but where's the will? AI does what we tell it to do. If we built one that could decide its goals for itself, we'd be talking about something human-like, but I think that's a long way off, if it's achievable at all.

Even programming an AI to be human-like doesn't necessarily solve the problem. We still gave it a directive (act like a human), and it can't will itself to do something else.


Discussion

Intentionality ("will") as experienced by humans is modular, complex, and mostly illusory.

https://sci-hub.se/10.1017/S0140525X00028636

Intentionality in bots is typically implemented very simply. Many game and economics simulation libraries model it, usually with decision trees, sometimes with complex weights.
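To make the decision-tree pattern concrete, here is a minimal sketch of how such libraries typically structure a bot's "will": a tree whose internal nodes weight their children by the current world state, with actions at the leaves. All names here (`Node`, `choose_action`, the trader-bot example) are illustrative, not taken from any particular library.

```python
import random

class Node:
    """One node of a weighted decision tree. Leaves carry an action;
    internal nodes carry (weight_fn, child) pairs."""
    def __init__(self, action=None, children=None):
        self.action = action            # set on leaves only
        self.children = children or []  # list of (state -> float, Node)

def choose_action(node, state, rng=random):
    """Descend the tree; at each internal node pick a child at random,
    with probability proportional to its state-dependent weight."""
    while node.action is None:
        weights = [w(state) for w, _ in node.children]
        node = rng.choices([c for _, c in node.children], weights=weights)[0]
    return node.action

# Hypothetical trader bot. With 0/1 weights the tree behaves like a
# plain if/else ladder; fractional weights would make it stochastic.
tree = Node(children=[
    (lambda s: 1.0 if s["cash"] < 10 else 0.0, Node(action="sell_goods")),
    (lambda s: 0.0 if s["cash"] < 10 else 1.0, Node(children=[
        (lambda s: 1.0 if s["enemy_near"] else 0.0, Node(action="flee")),
        (lambda s: 0.0 if s["enemy_near"] else 1.0, Node(action="buy_goods")),
    ])),
])

print(choose_action(tree, {"cash": 5, "enemy_near": False}))  # sell_goods
print(choose_action(tree, {"cash": 50, "enemy_near": True}))  # flee
```

The point of the sketch is how shallow this machinery is: the bot's "goals" are just weights an author typed in, which is the gap the original note is pointing at.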

I'm not aware of any LLMs designed to have complex human-like synthesis of intentionality, but there is no theoretical or practical barrier. I guess researchers just don't want to creep out the normies!

(And there's no market for capricious, willful AIs; we have AWFLs for that.)

Has human-like intentionality in artificial systems been attempted?

I would argue that at least some part of the human will is a distinct nature, and not just an illusory emergent phenomenon. We can imitate much of it with complex decision trees, but I suspect there is some piece that is wholly irreducible.

In the realm of reason, we might call the irreducible component "insight," which is the process by which utterly new ideas come out of nowhere. I'm not sure we'll be able to give machines that same spark of insight.

Insight itself is largely illusion - some semi-autonomous modules of the human mind are capable of quite sophisticated goal-directed planning and behaviour even when consciousness is directing attention elsewhere (or even asleep!).

The output - the insight - is experienced as coming "out of nowhere" by the consciously accessible parts of the human mind. But this is not irreducibly different in kind from a computer CPU experiencing an interrupt from a co-processor with a calculation to deliver.

Cognitive psychology is an Alice-in-Wonderland rabbit hole sometimes! I blame Sci-Hub for making the research so accessible :-p