Has anyone attempted to implement human-like intentionality in artificial systems?
I would argue that at least some part of the human will has a distinct nature of its own, and is not just an illusory emergent phenomenon. We can imitate much of it with complex decision trees, but I suspect some piece of it is wholly irreducible.
In the realm of reason, we might call that irreducible component "insight": the process by which genuinely new ideas seem to come out of nowhere. I'm not sure we'll ever be able to give machines that same spark of insight.