Replying to Lyn Alden

The concept has been covered in science fiction for decades, but I think a lot of people underestimate the ethical challenges associated with AI, and the possibility of machine consciousness in the years or decades ahead as these systems become orders of magnitude more sophisticated.

Consciousness, or qualia, meaning the experience of subjectively “being” or “feeling”, remains one of the biggest scientific and metaphysical mysteries in the world, alongside questions like how the universe came to exist in the first place.

In other words, when I touch something hot, I feel it and it hurts. But when a complex digital thermometer measures something hot with a set of sensors similar to my touch receptors, we consider it an automaton: it doesn’t “feel” what it is measuring, but rather just objectively collects the data, with no feelings or subjective awareness about it.

We know that we ourselves have consciousness (“I think, therefore I am”), but we can’t prove that anyone else does; this is related to the simulation problem, i.e. we can’t prove for sure that we’re not in some false environment. Hence the concept of a “philosophical zombie”: a being sophisticated enough to look and act human but which, much like the digital thermometer, doesn’t “feel” anything. The lights are not on inside. However, if we assume we are not in some simulation built solely for ourselves, then since we are all biologically similar, the obvious default assumption is that we are all similarly conscious.

And as we look at animals with similar behavior and brain structures, we make the same obvious assumption there. Apes, parrots, dolphins, and dogs are clearly conscious. As we move a bit further away, to reptiles and fish, they lack some of the higher brain structures and behaviors, so maybe they can’t feel “sad” the way a human or parrot can, but they almost certainly subjectively “feel” the world and thus can experience pain and pleasure. They are not automatons.

If we go further still, to insects, it becomes less clear. Their proto-brains are far simpler, and some of their behaviors suggest that they don’t process pain the way a human or even a reptile does. If a beetle is picked up by its leg, it’ll squirm to get away, but if the leg is ripped off and the beetle is put back down, it’ll simply walk away on its remaining legs and show no signs of distress. That’s not the behavior we’d expect from a more complex animal in severe suffering, and insects do lack the type of pain receptors that we and other complex animals have. And yet even creatures as simple as nematodes have dopamine as part of their neurological system, which implies maybe some dim subjective awareness of basic pleasure and pain.

Further still, we generally don’t imagine plants as being subjectively conscious like us and complex animals, but it does get eerie if you watch high-speed video of plants moving toward the sun, or consider how they secrete chemicals to communicate with other plants. There is some eerie level of distributed complexity there. And at the level of a single cell, is there any degree of dim conscious subjectivity as an amoeba eats some other cell, anything that would separate its experience from that of a rock, or is it a pure automaton? Simplest of all is a virus, barely definable as even a true lifeform.

The materialist view would argue that the brain is a biological computer, and thus that with sufficient computation, or a specific type of computational structure, consciousness emerges. This implies it could probably be replicated in silicon/software, or created in other artificial ways, whether through a breakthrough in understanding or by accident. A more metaphysical view instead suggests the idea of a soul: that a biological computer like a brain is necessary for consciousness but not sufficient, and that some metaphysical spark is needed to fill the gap and make it conscious. Or, if we remove the term soul, the metaphysical argument is that consciousness is some deeper substrate of the universe that we don’t understand, which becomes manifest through complexity. These two questions are similarly hard: where does consciousness come from, and why, for the universe, is there something rather than nothing?

In decades of playing video games, most of us would not assume that any of the NPCs are conscious. We don’t think twice about shooting bad guys in games. We know roughly how they are programmed and how simple they are, and there is no reason to believe they are conscious.

Similarly, I have no assumption that large language models are conscious. They apply a great deal of complexity to the task of predicting the next letter or word. I view ChatGPT as an automaton, even if a rather sophisticated one. Sure, it’s a bit more eerie than a bad guy in a video game due to its complexity, but I still don’t have much reason to believe it can subjectively feel happy or sad, or that the “lights are on” inside even as it mimics a human personality.
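
As a toy illustration of what “predicting the next word” means mechanically, here is a minimal sketch in Python (the tiny corpus and function names are invented for the example; real LLMs use neural networks trained on vast corpora rather than a frequency table, but the task is the same kind of conditional next-token prediction):

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then always
# emit the most frequently observed successor.
corpus = "the cat sat on the mat and the cat ate the food".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = successors.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # -> 'cat' (seen twice, vs. 'mat' and 'food' once each)
print(predict_next("on"))   # -> 'the'
```

An LLM does essentially this, except the “table” is a learned function with billions of parameters, which is exactly why its inner workings feel so much more opaque than a video-game NPC’s script.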

However, as AIs increasingly write code for other AIs that is more complex than any human can understand, as their processing power rivals or exceeds that of the human brain, and as the subjective interaction becomes convincing enough (e.g. an AI assistant repeatedly saying that it is sad, while we know its processing power exceeds our own), we will start to wonder. Movies like Ex Machina, I, Robot, and Her all handled this question well.

Even if we are 99% sure that a sufficiently advanced AI, whose code was written by other AIs and is so enormously complex that we barely understand any of it, is a sophisticated automaton with no subjective awareness and no “lights on” inside, then precisely because nobody truly understands the code at that point, there must remain at least that 1% of doubt as we consider: “What if the necessary complexity or structure of consciousness has emerged? Can we prove that it hasn’t?”

At that point we find ourselves in a unique situation. Within the animal kingdom, we are fortunate that brain structures and behavior line up: the more similar an animal’s brain is to our own, the more clearly conscious it tends to be, and we treat it accordingly. With AI, however, we could find ourselves facing robots that appear strikingly conscious while their silicon/software “brain” structure remains alien to us, leaving us hard-pressed to assess whether such a thing actually has subjective conscious awareness or is just extremely sophisticated at mimicking it.

And the consequences are high: in the off chance that silicon/software consciousness emerges and we don’t respect it, the amount of suffering we could inflict on countless programs over prolonged periods is immense. On the other hand, if we treat them as conscious because they “seem” to be when in reality they are not, then that’s foolish; it leads us to misuse or misapply the technology, and our social structure ends up built around the lie of treating things as conscious that are not. And of course, as AI becomes sophisticated enough to raise these questions, people will disagree about what’s going on under the hood and thus about what to do.
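
To make that asymmetry concrete, a back-of-the-envelope expected-value sketch (the numbers here are purely illustrative assumptions, not estimates of anything):

$$
\mathbb{E}[\text{suffering}] = p \cdot N \cdot d
$$

where $p$ is the probability the systems are conscious, $N$ is the number of running instances, and $d$ is the duration of mistreatment per instance. Even a “mere” $p = 0.01$ applied to $N = 10^9$ instances mistreated for $d = 1$ year each yields an expected $10^7$ instance-years of suffering, which is why the off chance can’t simply be rounded down to zero.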

Anyway, I’m going back to answering emails now.

Thanks for sharing your thoughts on this subject; it was a good read. I would like to propose an alternative definition of consciousness for you to consider. I have a much narrower view of what it is: I believe it is the sense of self, having the concept of "I".

I would recommend the book The Origin of Consciousness in the Breakdown of the Bicameral Mind by Julian Jaynes. He has a very thought-provoking theory about how we became conscious. In his research, he found evidence that we may have been conscious for only the past 3,000 years or so. He believed that we invented consciousness as a way to move forward as a species; think of it as a software upgrade rather than a hardware upgrade to the brain. His research also points to a possible explanation for the existence of gods and religion in human culture: he proposes that they were manifestations of how the preconscious mind functioned.

If he is correct and consciousness is learned, then it may be possible for AI systems to learn it from us at some point. Still, I think the one thing that AI can never have is humanity. We are a uniquely imperfect result of evolution. So much of what we do is driven by emotions and the chemical structure of our brains. We take the apparent stability of those systems for granted until something goes wrong and their fragility is revealed by brain damage or chemical imbalances. What we know as feelings and emotions could only ever be simulated by a computer, because it lacks the biochemical structure that makes us behave as we do.

I'm not saying that we would want them to have emotions like ours; that could result in the most dangerous outcome. I don't think anyone wants a super-intelligent being that acts like a human. Most sociopaths are intelligent, after all. I think it is impossible to fathom what a conscious AI system would even look like. If it managed to come to be, we wouldn't even be able to relate to its thought processes.

Just some thoughts inspired by yours.
