I've learnt to articulate it a bit better since we last spoke about it:

Only human beings can 'act'.

When we say act, we mean to say:

•Aiming at ends.

•Understanding the scarcity of the means available to achieve those ends: time, energy, resources, capital, etc.

•Choosing the preferable ends and means by rank-ordering them.

•Bearing the opportunity costs of aiming at certain ends and means over others.

•Choosing the preferable ends and means based on those costs.

•Taking the risk of incurring a loss by choosing the wrong means to achieve one's ends.

This is basically the action axiom.
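As a tangent, the rank-ordering and opportunity-cost steps in the checklist above can be sketched as a toy program. This is purely my own illustration, not anything from the thread: the ends, ranks, costs, and the hour budget are all made up.

```python
# Toy model of the checklist: ends are rank-ordered by preference,
# means (time) are scarce, and the opportunity cost of a choice is
# the next-best end forgone. All names and numbers are hypothetical.

ends = {                      # end -> (subjective rank, cost in hours)
    "write essay": (1, 5),    # lower rank = more preferred
    "fix roof":    (2, 8),
    "learn Rust":  (3, 20),
}

budget_hours = 10             # scarce means: available time

# Keep only the ends the scarce means can cover, then pick the
# most preferred one.
affordable = [e for e, (rank, cost) in ends.items() if cost <= budget_hours]
chosen = min(affordable, key=lambda e: ends[e][0])

# Opportunity cost: the highest-ranked end forgone by this choice.
forgone = sorted((e for e in affordable if e != chosen),
                 key=lambda e: ends[e][0])
opportunity_cost = forgone[0] if forgone else None

print(chosen)             # -> write essay
print(opportunity_cost)   # -> fix roof
```

Of course, the philosophical point in the thread is precisely that a human supplies the ranks and bears the costs; the program only shuffles numbers it was handed.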

Wading further into the philosophical territory, we can say that only human beings can 'reason' about things.

H/T Hans-Hermann Hoppe. His writings have helped me understand the above.

Furthermore, perhaps a theological argument can be made that only human beings have the ability to access 'revelational knowledge'. Only they have the gift of self-realization. But I am too young and inadequately read to elaborate this particular point properly.

Discussion

Well said. This whole obsession with AI is not new; people have been theorizing about computer awareness and AI since the 1990s. But to this day, no one has been able to prove or explain how a computer could develop consciousness. You need consciousness to do the things you listed above. There has to be an internal motivation for the AI to do something like take over the world or use bitcoin to achieve some goal. But it has no reason to want anything; there is no feedback mechanism that pushes it to act. As Mises said, it is the sense of uneasiness that pushes human action. AI and other technological machines do not have this sense of uneasiness.

I love AI and I love using it.

I don't share the skepticism about the economic consequences of its adoption. I think it's going to help a lot of people and companies accumulate capital and increase their productivity. It has certainly helped me.

What I am not a fan of is statists using the tech as an excuse to expand governments.

Setting aside the theological section, which part of your post describes something an AI will never be able to do? To me, they all look like things an AI could develop some ability to do (albeit very weirdly).

An AI can't 'act', is what I meant.

The rest was about explaining what I mean by 'acting'. Not separate things.

So you're just asserting AI can't act, and the definition is not related to any argument supporting your assertion? That's fine if that's the case, I'm just wondering why you think an AI can't act.

It was all related and implied.

The reason why I defined all that was to explain what 'acting' entails. Everything I described as the categories of 'action' are things an AI cannot do.

I thought it to be better than simply saying that an AI cannot act.

•An AI can't aim at ends and consciously choose the means to achieve those ends.

•It cannot own itself or other resources and things. It does not have a will. It has no conception of 'labour'.

•An AI cannot understand the concept of scarcity because it cannot own things.

•Not being able to aim at ends or understand the scarcity of means, it cannot choose between different ends and means.

•Not being able to own things, it cannot bear the costs and risks.

•Not having any conception of costs and risks, it cannot have preferences.

•It cannot have profit and losses as a result of all the above.

All an AI can do is do what it's told. And this function of doing what it's told can increase in complexity. That's pretty much what we're looking at.

Thus, an AI cannot act. Of its own accord.

You added soooo many things in this new post.

Anyway, I disagree. I think human action is a type of action, animal action is a different type, and AI action is (or will be) even more different. It may not have consciousness, or be able to own things (although in some sense they will be able to exclusively control some Bitcoin, so maybe that is a type of ownership). But I don't think an entity needs consciousness or ownership in order to act.

To further discussion, what if we agree to disagree regarding action and instead talk about whether it can "do stuff". Do you think it can do stuff?

On its own, it can't 'do stuff'. Nope. A human being can tell it what to do, and only then will it do anything. That's how it is now, and that's how it will always be.

We won't get further if we can't agree on definitions. 'Doing stuff' and 'acting' are the same thing to me, and I extensively defined what it means to act. I'm not sure how the concept of 'doing stuff' differs from those definitions.

We don't have to get further. I don't want to spend all my time on this. Thanks for the discussion.

Be well. 🤙