It was all related and implied.
I defined all of that to explain what 'acting' entails: every category of 'action' I described is something an AI cannot do. I thought that clearer than simply asserting that an AI cannot act.
•An AI cannot aim at ends or consciously choose the means to achieve them.
•It cannot own itself, resources, or other things. It has no will, and no conception of 'labour'.
•An AI cannot understand the concept of scarcity because it cannot own things.
•Being unable to aim at ends or understand the scarcity of means, it cannot choose between different ends and means.
•Being unable to own things, it cannot bear costs and risks.
•Having no conception of costs and risks, it cannot have preferences.
•As a result of all the above, it cannot have profits and losses.
All an AI can do is what it's told, and that function of doing what it's told can increase in complexity. That's pretty much what we're looking at.
Thus, an AI cannot act of its own accord.