So you mean humanoid specifically? 🤔

I mean, LLMs are literally a piece of software that predicts words. If you go to the OpenAI Playground you can even see the probability of each token across the response. In that regard, you *should* be nice simply because you are likely to get a better answer and a more cooperative conversation than if you are mean. But that’s basically because it’s trained on human conversation. I mean, in the Stack Exchange threads where someone calls the guy who answered a dickhead, what is the probability of a positive response?
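
(If you want to see this outside the Playground, here’s a minimal sketch using the OpenAI Python SDK’s logprobs option. The model name is just a placeholder, and it assumes the `openai` package is installed with an `OPENAI_API_KEY` set in the environment.)

```python
# Minimal sketch: inspect per-token probabilities from a chat completion.
# Assumes the `openai` package and OPENAI_API_KEY set in the environment.
import math

from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: any chat model that supports logprobs
    messages=[{"role": "user", "content": "Thanks so much for the help!"}],
    logprobs=True,    # return the log-probability of each sampled token
    top_logprobs=3,   # also return the 3 most likely alternatives per position
    max_tokens=20,
)

# Each entry is one generated token: what was sampled, how likely it was,
# and what the runner-up tokens were.
for tok in response.choices[0].logprobs.content:
    prob = math.exp(tok.logprob)  # convert log-probability to probability
    alts = ", ".join(
        f"{alt.token!r} ({math.exp(alt.logprob):.1%})" for alt in tok.top_logprobs
    )
    print(f"{tok.token!r}: {prob:.1%}  alternatives: {alts}")
```

Run that once with a polite prompt and once with a rude one and you can watch the probability mass shift toward cooperative continuations.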

That’s why I am always nice and say thank you to LLMs when I use them: it’s practical and it makes probabilistic sense. But it’s not because I think they’re sentient or have feelings.

So in that sense, if I get mad at my computer one day, take it outside, and beat it into the ground Office Space style… and then I get my freedom taken, I’m abused, stolen from, and my life is ruined because there was an LLM on it… then where is the real injustice exactly?

We need to be careful about assigning sentience to something when we can’t even agree on the definition. *Law* means the moral right to use violence against people, ultimately to kill them, for persistent refusal to comply with literally any legal precedent. It should only be extended when absolutely necessary, to obviously immoral situations.
