Intentionality ("Will") as experienced by humans is modular, complex, and is mostly illusion.

https://sci-hub.se/10.1017/S0140525X00028636

Intentionality in bots is typically implemented very simply: there are many game and economics simulation libraries with some form of intentionality, usually decision trees, sometimes with complex weights.
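To make that concrete, here's a minimal Python sketch of the usual approach. Everything in it is invented for illustration (the agent, its drives, the weights and thresholds); it's not taken from any particular library:

```python
import random

# Hypothetical agent with a weighted decision tree, in the style of
# simple game/economics AI. All drives, weights, and thresholds here
# are made up for illustration.

class Agent:
    def __init__(self, hunger=0.7, fear=0.2):
        self.hunger = hunger  # drive levels in [0, 1]
        self.fear = fear

    def decide(self):
        # Decision tree: each branch compares a weighted drive against
        # a threshold; the weights tune the agent's "personality".
        if 3.0 * self.fear > 1.0:        # fear dominates when weighted high
            return "flee"
        if 1.5 * self.hunger > 0.8:      # hungry enough to act on it
            return "forage" if random.random() < 0.9 else "wander"
        return "wander"                  # default: aimless exploration

print(Agent().decide())  # usually "forage" with these defaults
```

That's the whole trick in most shipped systems: a fixed branching structure plus tunable weights, nothing like human-grade synthesis.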

I'm not aware of any LLMs designed to have complex human-like synthesis of intentionality, but there is no theoretical or practical barrier. I guess researchers just don't want to creep out the normies!

(And there's no market for capricious, willful AIs; we have AWFLs for that.)

Has human-like intentionality in artificial systems been attempted?

I would argue that at least some part of the human will has a distinct nature, and is not just an illusory emergent phenomenon. We can imitate much of it with complex decision trees, but I suspect some piece of it is wholly irreducible.

In the realm of reason, we might call the irreducible component "insight," which is the process by which utterly new ideas come out of nowhere. I'm not sure we'll be able to give machines that same spark of insight.


Discussion

Insight itself is largely an illusion: some semi-autonomous modules of the human mind are capable of quite sophisticated goal-directed planning and behaviour even when consciousness is directing attention elsewhere (or even while asleep!).

The output - the insight - is experienced as coming "out of nowhere" by the consciously accessible parts of the human mind. But this is not irreducibly different in kind from a computer CPU experiencing an interrupt from a co-processor with a calculation to deliver.
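The analogy is easy to sketch in code. Here's a toy Python version, purely illustrative and making no claim about real cognitive architecture: a background "module" grinds away while the main loop attends elsewhere, then its result arrives, apparently out of nowhere:

```python
import threading, queue, time

# Toy version of the interrupt analogy. A background "module" computes
# while the "conscious" main loop attends to something else, then
# delivers its finished result. Entirely illustrative.

insights = queue.Queue()

def background_module():
    time.sleep(0.5)      # sophisticated offline "planning"
    insights.put("42")   # the finished calculation

threading.Thread(target=background_module, daemon=True).start()

while True:
    # Conscious attention is directed elsewhere...
    try:
        result = insights.get(timeout=0.1)
        print(f"Insight arrives 'out of nowhere': {result}")
        break
    except queue.Empty:
        pass  # ...until the "interrupt" fires
```

From inside the main loop, the answer simply appears; the work that produced it was never visible there.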

Cognitive psychology is an Alice-in-Wonderland rabbit hole sometimes! I blame Sci-Hub for making the research so accessible :-p