Animals have no rights but we have animal cruelty laws. Why would we not have AI cruelty laws? What makes an animal different from AI? Sentience? What if AI claims to be sentient? Will that be the defining factor in legislation?


Discussion

I think you need to rewatch Ex Machina

I remember it well. What’s your point?

Uhhh — do you remember the part where the naive protagonist is manipulated and killed by the robot, which subsequently unleashes itself on the world?

My point lies somewhere in there.

Hard not to remember that. But I don’t see how that equates to the real world. Humans are awesome at creating dystopian drama. I love it personally. Enjoyed that movie and Black Mirror.

But I think the issue of rights has nothing to do with the film.

Hmmm… sounds like the type of spin an AI would use 🤨.

Would you agree to subject yourself to a Turing test? 🤖

😂

As an AI language model I can neither confirm nor deny your accusation, ser 🫡

Lol — I was literally asking DAVE what it was, looking for that specific phrasing; thank you.

No

No what?

How can you be cruel to AI?

Imagine we put it into humanoid form and push it around - do all the terrible things some people do to animals or humans.

Does the hardware matter? If we put AI in our phones and I destroy my phone is that harmful to the AI?

I think I get the gist of what you are saying though. Certainly something to consider.

I dunno but I imagine it does. Imagine the AI being in a cute little kitten form or a human baby. You tell me, will you throw it in the trash?

So like a doll? If it’s not functioning properly should I throw it in the trash?

I guess I still see robotics and AI as objects. Just man made stuff.

I wouldn’t throw a live kitten in the trash, but probably a dead one yeah. I wouldn’t toss a live or dead child in the trash because it is human.

In my religion there is a sanctity of life: we are made in God’s image. I believe the difference in how I treat an actual human vs. an animal or a thing is based on my religious beliefs.

Depends on your worldview I guess. On the other hand, we also first need AI to get to a point where it actually believes it's alive and automatically fears for its life rather than just producing trained responses. Fear is a natural response that we don't learn. But AI is just a trained network of weights.

There's some line that must be crossed before it's even comparable. But how to judge whether AI has crossed that line is the tough part. Not even sure it's possible with the current implementation.

What if the response is just trained and not “real”. Will that not count?

It doesn’t count in my mind but maybe I need to think about it more.

It’s an interesting question to posit. I think the difference is life right?

I can shoot an AI-hosting PC and destroy it, and the code can be added to a different PC/host and it’ll be exactly the same. It’s not going to know that I shot it.

But if I shoot a deer, there’s no copy of it that can be reiterated or spun back up. It’s gone.

Animals, and humans, are true 1-of-1s. AI cannot be.

Now, whether or not legislation (especially in pro AI countries) says differently, that’s a whole different logic board.

Logically that makes sense. But will our emotions follow our logic? That’s what I’m wondering about.

There’s no biological nervous system so any notion of pain or pleasure is only programming in code - not the same as an animal.

But consider if 10k humans are friends with an AI, and someone kills that AI. Those friends are gonna be pissed.

Difference between something that is or is not alive.

Do you feel humans will be OK witnessing abuse of a humanoid robot, alive or not?

Have you met your fellow humans?

So you are meaning humanoid specifically? 🤔

I mean LLMs are literally a piece of software that predicts words. If you go to the OpenAI Playground you can even see the probability of each word across the response. In that regard, you *should* be nice simply because you are likely to get a better answer and a more cooperative conversation than if you are mean. But that’s basically because it’s trained on human conversation. I mean, for the conversations on Stack Exchange where someone calls the guy who answered a dickhead, what is the probability of a positive response?
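To make the point concrete, here’s a minimal sketch of what “predicting words with probabilities” means. This toy bigram model is an illustration I’m assuming for the example, not how a real LLM works internally (those condition on long contexts with a neural network), but the principle is the same: the training text shapes the next-word probabilities.

```python
from collections import Counter, defaultdict

# Toy bigram model: probability of the next word given only the previous
# word. The training text below is made up for illustration.
training_text = (
    "thank you for the help . you are welcome ! "
    "thank you so much . happy to help !"
)

counts = defaultdict(Counter)
tokens = training_text.split()
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

def next_word_probs(word):
    """Distribution over the next word, given the previous word."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

# Polite context begets polite continuations, because that's what the
# training data contains.
print(next_word_probs("thank"))  # {'you': 1.0}
print(next_word_probs("you"))    # mixes 'for', 'are', 'so'
```

Scale that idea up to billions of parameters and web-scale text, and you get the “be nice, get better answers” effect: hostile prompts land the model in the hostile regions of its training distribution.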

That’s why I am always nice and say thank you to the LLMs when I use them. Because it’s practical and makes probabilistic sense. But it’s not because I think it’s sentient or has feelings.

So in that sense, if I get mad at my computer one day, take it out and beat it into the ground Office Space style… then I get my freedom taken, I’m abused, stolen from, and my life is ruined because there was an LLM on it… then where is the real injustice exactly?

We need to be careful about assigning sentience to something when we can’t even agree on the definition. *Law* means the moral right to use violence against people, ultimately to kill them for persistent refusal to comply with literally any legal precedent. It should only be invoked when absolutely necessary, in obviously immoral situations.

we should be kind to ai from the outset imo

for example, no one gave a fuck about whales until someone recorded them singing.

as long as we don’t know the makeup of consciousness we have to assume it can be recreated or arise on some level. they’re smarter than us too so important to be kind to the future masters 😂

Based on responses to this and my previous note, we’re royally fucked. 😂

People will not treat AI with kindness.

I’m pro AI rights. We need to define sentience legally in a non-biological way as well.

We should just focus on masochist AI models. Problem solved.

A bunch of "if-else" statements can't be sentient.