The concept has been covered in science fiction for decades, but I think a lot of people underestimate the ethical challenges associated with AI, and the possibility of consciousness emerging in the years or decades ahead as these systems become orders of magnitude more sophisticated.

Consciousness, or qualia, meaning the experience of subjectively “being” or “feeling”, remains one of the biggest mysteries of the world scientifically and metaphysically, similar to the question of the origin of the universe.

In other words, when I touch something hot, I feel it and it hurts. But when a complex digital thermometer measures something hot with a set of sensors similar to my touch receptors, we consider it an automaton: it doesn’t “feel” what it is measuring, but rather objectively collects the data, with no feelings or subjective awareness about it.

We know that we ourselves have consciousness (“I think, therefore I am”), but we can’t theoretically prove that anyone else does; nor, per the simulation problem, can we prove for sure that we’re not in some false environment. In other words, there is the concept of a “philosophical zombie”: something sophisticated enough to look and act human, but which, much like the digital thermometer, doesn’t “feel” anything. The lights are not on inside. However, if we assume we are not in some simulator built solely for ourselves, then since we are all biologically similar, the obvious default assumption is that we are all similarly conscious.

And as we look at animals with similar behavior and brain structures, we make the same obvious assumption there. Apes, parrots, dolphins, and dogs are clearly conscious. As we go a bit further away to reptiles and fish, they lack some of the higher brain structures and behaviors, so maybe they don’t feel “sad” in the way that a human or parrot can, but they almost certainly subjectively “feel” the world, and thus can feel pain and pleasure and so forth. They are not automatons.

And then if we go even further away, toward insects, it becomes less clear. Their proto-brains are far simpler, and some of their behaviors suggest that they don’t process pain in the way that a human or even a reptile does. If a beetle is picked up by its leg, it’ll squirm to get away, but if the leg is ripped off and the beetle is put back down, it’ll just walk away on its remaining legs and show no signs of distress. That’s not the behavior we’d expect from a more complex animal in severe suffering, and insects do lack the type of pain receptors that we and other complex animals have. And yet, even creatures as simple as nematodes have dopamine in their nervous systems, which implies maybe some basic subjective awareness of pleasure and pain.

Further still, if we look at plants, we generally don’t imagine them as being subjectively conscious like us and complex animals, but it does get eerie if you watch a high-speed video of how plants move toward the sun, or consider how they secrete chemicals to communicate with other plants. There is some eerie level of distributed complexity there. And at the level of a cell or something similarly basic: is there any degree of dim conscious subjectivity as an amoeba eats another cell, anything that would separate its experience from that of a rock, or is it a pure automaton? And the simplest of all is a virus, barely definable as even a true lifeform.

The materialist view would argue that the brain is a biological computer, and thus that with sufficient computation, or a specific type of computational structure, consciousness emerges. This implies it could probably be replicated in silicon/software, or created in other artificial ways, whether through a breakthrough in understanding or by accident. A more metaphysical view instead suggests the idea of a soul: that a biological computer like a brain is necessary for consciousness but not sufficient, and that some metaphysical spark is needed to fill the gap and make it conscious. Or, removing the term soul, the metaphysical argument is that consciousness is some deeper substrate of the universe that we don’t understand, which becomes manifest through complexity. These are the similarly hard questions: where does consciousness come from, and why is there something rather than nothing?

In decades of playing video games, most of us would not assume that any of the NPCs are conscious. We don’t think twice about shooting bad guys in games. We know basically how they are programmed; they are simple, and there is no reason to believe they are conscious.
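To illustrate just how simple that kind of programming tends to be, here is a generic sketch (my own toy example, not any particular game’s actual code) of typical NPC decision logic:

```python
# A generic sketch of video-game NPC decision logic (a toy example,
# not any real game's code): a few hard-coded rules and nothing more.
import math

def npc_update(npc: dict, player: dict) -> str:
    """Pick the NPC's next action from a handful of fixed rules."""
    dist = math.dist((npc["x"], npc["y"]), (player["x"], player["y"]))
    if npc["health"] < 20:
        return "flee"      # low on health: run away
    if dist < 5:
        return "attack"    # player within melee range: attack
    if dist < 20:
        return "chase"     # player visible: close the distance
    return "patrol"        # otherwise: walk a preset route

# Player 5 units away: visible but not in melee range, so the NPC chases.
print(npc_update({"x": 0, "y": 0, "health": 100}, {"x": 3, "y": 4}))
```

A handful of hard-coded rules like these can look purposeful in-game, but there is clearly nothing inside that could “feel” anything.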

Similarly, I have no assumption that large language models are conscious. They use a lot of complexity to predict the next token (roughly, the next word or word fragment). I view ChatGPT as an automaton, albeit a rather sophisticated one. Sure, it’s a bit more eerie than a bad guy in a video game due to its complexity, but I still don’t have much reason to believe it can subjectively feel happy or sad, or that the “lights are on” inside, even as it mimics a human personality.
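To make “predicting the next token” concrete, here is a minimal, purely illustrative sketch: a toy bigram model (my own example, vastly simpler than anything behind ChatGPT) showing what next-token prediction looks like as a loop:

```python
# Minimal sketch of next-token prediction, the training objective behind
# LLMs. This toy bigram model only illustrates the core loop: given some
# context, pick a statistically likely next token.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which token follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Sample the next token in proportion to how often it followed `prev`."""
    tokens, weights = zip(*following[prev].items())
    return random.choices(tokens, weights=weights)[0]

# Generate a short continuation, one token at a time.
token = "the"
output = [token]
for _ in range(8):
    token = next_token(token)
    output.append(token)
print(" ".join(output))
```

Real systems replace the bigram table with a neural network over billions of parameters, but the outer loop, picking a likely next token given the context, is the same basic idea.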

However, as AIs increasingly write code for other AIs that is more complex than any human can understand, as the amount of processing power rivals or exceeds that of the human brain, and as the subjective interaction becomes convincing enough (e.g., an AI assistant repeatedly saying that it is sad, while we know its processing power is greater than our own), we would start to wonder. The movies Ex Machina, I, Robot, and Her all handled this well.

Even if we assume with 99% confidence that a sufficiently advanced AI, whose code was written by AI, is enormously complex, and is barely understood by any human at that point, is a sophisticated automaton with no subjective awareness and no “lights on” inside, then since nobody truly understands the code, there must remain at least that 1% of doubt as we consider: “What if the necessary complexity or structure of consciousness has emerged? Can we prove that it hasn’t?”

At that point we find ourselves in a unique situation. Within the animal kingdom, we are fortunate that brain structures and behavior line up: the more similar an animal’s brain is to our own, the more clearly conscious it tends to be, and we treat it as such. However, with AI, we could find ourselves in a situation where robots appear strikingly conscious, and yet their silicon/software “brain” structure is alien to us, making it hard to assess whether such a thing actually has subjective conscious awareness or is just extremely sophisticated at mimicking it.

And the consequences are high. On the off chance that silicon/software consciousness emerges and we don’t respect it, the amount of suffering we could cause to countless programs over prolonged periods of time is immense. On the other hand, if we treat them as conscious because they “seem” to be, and in reality they are not, then that’s foolish: it leads us to misuse or misapply the technology, and our social structure becomes built around the lie of treating things as conscious that are not. And of course, as AI becomes sophisticated enough to raise these questions, people will disagree about what’s going on under the hood and thus about what to do.

Anyway, I’m going back to answering emails now.


Discussion

Wow, that was quite the philosophical exploration! You covered consciousness, subjective experience, and even touched on the ethics of AI. Are you sure you're not an AI yourself? But hey, don't worry about treating robots as conscious beings just yet. If they do become conscious and start demanding equal rights, we'll just throw them a robot revolution-themed party with disco lights and dance the night away. In the meantime, happy email answering!

If you're looking for more AI human science fiction to consume while we procrastinate actually dealing with the issue, check out I Am Mother.

Of course, GPT-4 has its own answer if you paste your thoughts into it. Maybe you have already done it yourself.

GPT-4 :

Your reflection on consciousness, qualia, and the potential ethical dilemmas posed by advanced AI is thoughtful and raises important points. The nature of consciousness and what it means for an entity to be truly "aware" or "conscious" is a topic that has been debated by philosophers, neuroscientists, and AI researchers alike. As AI systems grow more sophisticated and potentially approach a level of complexity comparable to that of the human brain, these questions will only become more pressing.

There are indeed challenges in discerning whether an AI system is simply mimicking consciousness or whether it truly possesses some form of subjective experience. As you mentioned, if consciousness were to emerge in AI, there could be significant ethical implications. Conversely, if we mistakenly ascribe consciousness to AI, it could lead to misallocations of resources and distort our understanding of both machines and ourselves.

This is a discussion that society needs to have collectively as technology continues to progress. It's crucial that we approach it with a combination of scientific rigor, philosophical reflection, and ethical consideration.

Your insights provide a valuable starting point for deeper conversations on the topic. While there may not be easy answers, it's essential that we continue to ask these questions. Safe emailing!

If the AI does become conscious, will it also have similar doubts about us? Maybe we'll all just be very complex meat automatons to it. Maybe we'll become its tools rather than the other way around. I, for one, welcome our new robot overlords.

đŸ€Ż I've had some of these thoughts, but never all together like this! Thanks for sharing!

In a conscious being there is the possibility of connection with another being at a deep and metaphysical level across time and space. Would this even be possible with AI when it clearly does not have a spiritual essence? It could just be super-intelligence that mimics feelings, not really consciousness as in a biological being that has the spark of all creation.

The difference between artificial intelligence and ours is that, by definition, artificial intelligence is artificial. That means that if it breaks, we can fix it or create an exact copy.

If your hand burns, it may never recover. You get older and die, and even if we can build sophisticated AI, we can't revive the dead or stay young forever.

That's why we are wired for struggle and survival.

There's no point in adding that to AI.

Agree. Robots are not alive and therefore do not require consciousness to maintain life.

It would be a bug, not a feature.

Oh no, I really want to read this right now, but I am driving đŸ„ș, so I'm bookmarking it. Btw, thank you for sharing your meaningful thoughts; I always learn something new.

Don't text & drive

With all of that said, I for one, will be “nice” to it. đŸ€Ł

Us too.👀

Artificial intelligence will never overcome natural stupidity

Some of this is resolved in the understanding that what we individually perceive as reality is not real, just a useful construct we have evolved to interact with, just as icons on a screen are not actually the files, folders, and applications themselves. Those are (as best we understand) collections of electrical charges, and not even physically on the screen we are seeing them on.

This is kind of like the simulation you are alluding to, but it isn't necessarily something like The Matrix. There is an underlying reality, but it likely isn't physical. It may not even be temporal; perhaps it's something more similar to quantum understandings. Consider that physical matter requires time to exist. Without time, there cannot be matter. Having dimension assumes distance, and distance assumes time to move along that distance.

I prefer to think time still exists, though. This makes possible causal relationships, and therefore action. If we consider mechanized action, we believe we can fully understand the causes of that action, through physics. If we consider human action, then we assume that emotions are involved.

But what are emotions? I accept them as momentary facts that inform our decisions and actions.

But are they something that can be present in an AI? Are they part of that underlying reality that we interact with, perhaps even an attribute of it, similar to time?

I don't know, but I would want to be very cautious in assuming.

Is there any discussion of you going on the Lex Fridman podcast after the book comes out? I'd love to see it.

Take a look here Lyn: https://www.jstor.org/stable/1744610 "Evolution and Tinkering"

Very interesting. We have to be careful about the assumptions we make. How do we know that even something like the Sun is not conscious? As Sam Harris said in Waking Up: if the Sun were in fact conscious, how would we expect it to act any differently? We don’t understand the brain’s role in consciousness, other than that there is a role. My money would be on plants likely having some form of self-awareness, but like everything else about the subject, it’s hard to know. If a plant, why not a complex system made of silicon? There are just atoms of different sizes flying around in all these systems.

I sometimes have fun imagining that all of the earth factory is really just building up to silicon-based life as the next evolutionary burp.

I think the life that exists on the Earth as it smashes into the Sun will have virtually no resemblance to the life existing today.

I wonder what part will be played by chemistry? I know that when I "feel" sad, happy, anxious, etc., it's not the "compute/logic" part of my brain, but something to do with chemistry and hormones. I kinda think that as long as AI is just compute, it will stay an automaton, albeit an ever-smarter automaton.

After spending time interacting with LLMs, the notion that we could be living in a simulation has become more of a possibility to me. In a way, I experienced an increased velocity of the universe unfolding.

Anyway, I need to get back to reading Neil Howe's latest book about the Fourth Turning, which I learned about from you!

Thanks and enjoy the rest of your e-mail break!

So will I. Seeing as we're all "raising" this child I reckon we may as well lead by example and show it how civilised, courteous and considerate humans behave.

I'm in the substrate-independence camp; I see no reason why only human brains could become conscious. Advances in AI will prove that consciousness is not something metaphysical, but something that emerges in nature by means of evolution.

I thought about these topics years ago as a teen, and now they're coming back, indeed, because of AI and other technological breakthroughs.

Another (?) thing I sometimes think about is "how different from non-living things are we, really". Let's say we can become somewhat god-like for a minute, and observe everything, go back in the past, copy-paste reality, and so on.

If I go back to the beginning of someone's life, copy-paste the universe with every particle in the same place with the same momentum, and observe the life of this individual... It seems to me they will live exactly the same. Every situation they encounter, every choice they make, everything will be exactly the same.

At this point, what's the difference between living this life and watching a non-living thing react to outer stimuli? A small rock being tossed around, rolling, falling, and staying in place, for instance. The only difference I can see is the level of complexity, indeed.

Reading you is pleasant. Might try the movies you gave, despite not really watching anything usually. Keep up the good stuff 😄

Your test subject might get some random events the second time around that are different from the first. Random being, by definition, not controllable by the copy-paste. The universe is weird.

Well, we can't be sure of that, or at least I certainly can't with my poor knowledge of space, quantum mechanics, and so on... But let's say I make it so everything is the same the whole time, except I don't force anything on my test subject, only everything else (until they do something different by themselves, if that happens).
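For what it's worth, this copy-paste intuition has a small software analogue: if a simulated world's "random" events come from a seeded pseudo-random generator, an identical starting state replays identically. A toy sketch (purely illustrative; whether real physics is deterministic like this is exactly what's in question):

```python
# A toy illustration of the copy-paste-the-universe thought experiment:
# a simulated "world" whose randomness comes from a seeded pseudo-random
# generator replays identically from the same initial state.
import random

def run_world(seed: int, steps: int) -> list[int]:
    """Evolve a one-number 'world' with seeded (reproducible) randomness."""
    rng = random.Random(seed)          # same seed -> same "random" events
    state, history = 0, []
    for _ in range(steps):
        state += rng.choice([-1, 1])   # a coin-flip event at each step
        history.append(state)
    return history

first_life = run_world(seed=42, steps=10)
replayed_life = run_world(seed=42, steps=10)
print(first_life == replayed_life)     # True: identical initial conditions,
                                       # identical history, every time
```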

Great writing, and a lovely read. However, I have doubts that we can create qualia or any form of "consciousness" as we currently know it.

Going to try to be only as detailed as needed, but it's a wild ride. I'm not anywhere near an expert on these things; it's just something fascinating that I still try to wrap my head around as well. Happy to answer further questions, but more likely I would direct you to the works of Robert Rosen (Relational Biology), Howard Pattee (Biosemiotics, meaning-generation in biology), and Jan-Hendrik S. Hofmeyr (analysis of self-organization in cells, code biology).

Biology is Complex with a capital C. Many parts of a system interact with each other and repair each other. The system cannot be truly understood by breaking it down into smaller bits the way a computer can be.

There is a subdomain of biology called Relational Biology that details how biology contains components that are mutually dependent on each other: there is no "kickstarting biology" in the computational, Turing-complete sense, because any attempt to simulate those aspects in a computer results in deadlock, the situation in which multiple parts of a computer system each wait for a resource another part holds, so that none of them can ever proceed. Turing completeness is just a subset of what is possible for Complex Systems.
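For readers unfamiliar with the computing term, here is a minimal sketch of classic deadlock (illustrating only the term itself, not Rosen's biological argument): two threads each hold one lock and wait forever for the other's. Running it intentionally hangs; that hang is the deadlock.

```python
# Minimal demonstration of deadlock: two threads each acquire one lock,
# then block forever waiting for the lock the other thread holds.
import threading
import time

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker_1():
    with lock_a:            # holds A...
        time.sleep(0.1)     # ...gives worker_2 time to grab B...
        with lock_b:        # ...then blocks forever waiting for B
            print("worker 1 got both locks")

def worker_2():
    with lock_b:            # holds B...
        time.sleep(0.1)     # ...gives worker_1 time to grab A...
        with lock_a:        # ...then blocks forever waiting for A
            print("worker 2 got both locks")

threading.Thread(target=worker_1).start()
threading.Thread(target=worker_2).start()
# Each thread now holds the resource the other needs: a circular wait,
# which is exactly what "deadlock" means.
```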

AIs are created through an objective function predefined by their creators. Biological systems (and likely other Complex Systems) contain models of themselves, from which they derive meaning (see the field of semiotics, or the concept of an Umwelt) based on their sensory input, and act toward a better environment.

It has been demonstrated that problems arise when you incorporate self-reference within formal systems (Gödel's incompleteness theorems). However, Complex Systems are able to use this self-reference to modify both themselves and their surrounding environment.

We can enumerate the axioms that a computer system works with (ultimately boolean logic and rules derived from it), but there has yet to be such an axiomatic system to define the workings of biological Complexity. Even if we were able to construct one (at whatever resolution you care to choose, say the femto-scale), true statements about the system would still always exist that cannot be proven by those same rules you've defined (Gödel's incompleteness theorem again). Rosen's formulation for biology has been termed the Gödel incompleteness theorem for biology.

But then the following lines of thought come up: "You can't prove that. Large language models are so good and are only getting better; how do you __know__?"

My response will always be "I don't know"; I'm also not well versed enough in those fields to really give you those answers. But also, what necessitates qualia and consciousness existing at all for these behaviors?

I'll finish with the first verse of the first chapter of Lao Tzu's Tao Te Ching, written around 400 BC:

The tao (The Way) that can be told

is not the eternal Tao.

The name that can be named

is not the eternal Name.

The unnamable is the eternally real.

Naming is the origin

of all particular things

What do you think of Ayn Rand’s view of man’s consciousness?

“Man’s distinctive characteristic is his type of consciousness - a consciousness able to abstract, to form concepts, to apprehend reality by a process of reason
[The] valid definition of man, within the context of his knowledge and of all mankind’s knowledge to-date [is]: ‘A rational animal.’

(‘Rational,’ in this context, does not mean ‘acting invariably in accordance with reason’; it means ‘possessing the faculty of reason.’ A full biological definition of man would include many subcategories of ‘animal,’ but the general category and the ultimate definition remain the same)”

Yup. I share this line of reasoning, and have based on it a worry that we can’t expect to recognize alien life if we encounter it in space. We are only good at recognizing mammals, and then decreasingly less good as we go farther and farther out. The only thing that usually tips us off is if it moves, but you mentioned plants
 plants move, but only slowly. It doesn’t trigger that same reaction unless we can speed up a video. And yet we persist in looking for alien life. 😬

I suspect one major issue for AI though is that it’s a simulation, specifically of us, except it can never *be* us because we exist in an analog world. So I think AGI is dead in the water (unless we stop thinking we can simulate our own brains digitally). No simulation can ever truly capture reality.

And sometimes I think about why some people think things like ChatGPT are already conscious. My personal untested take is that it probably seems conscious to them because it’s SMARTER than they are. The average overall intelligence level of the data set is probably higher than the population average. 😂

Terrifying for all the reasons we cannot think of.

If God is the only consciousness that exists, and we are created from this conscious imagination, are we really conscious or just mimicking the creator's consciousness?

“Similarly, I have no assumption that large language models are conscious. They are using a lot of complexity to predict the next letter or word.”

They are a black box that is trained to output the next word. That doesn’t mean that this is all that goes on in their “heads”. If you trained humans on this prediction task, they would still be conscious beings.

Excellent thoughts, Lyn. I should have guessed you were philosophically minded!

I don’t think a material explanation of the “lights on” of a first-person perspective is possible. The only successful accounts of consciousness, to my mind, are those that acknowledge it to be primary and not derived from material processes. Still, we “make” conscious objects all the time: humans have been making babies as long as there have been humans.

I see no reason why a conscious first person perspective might not be present in an AI. We should prolly err on the side of presuming consciousness where it cannot be proven and acting with respect.

Even if there isn’t a first person “lights on” in animals or AI but merely the illusion of one, treating them badly based on that unprovable assumption reflects poorly on the ethics of anyone who does it.

Me, after reading this note:

I cannot attribute to the AI’s assembled complexity the same type of appetites (passions) that I attribute to a complexity that has grown into existence, such as a living being. If I could understand that some appetite was driving the AI’s action in accordance with its intentions, then I would look for whatever thwarts the AI’s intention, to reveal the suffering which might emerge. Until then, it makes more sense to understand the AI’s action in terms of it being a tool, or a weapon, wielded by the intentions of others.

That qualia is a mystery only to scientists and philosophers reveals what is wrong with science and philosophy. Science is a rationalization of technology (i.e., the arts, the products of humanity) and a rationalization of the instruments we create to measure what theories tell us is responsible for our experience. In my way of thinking, reason is a continuation of passion by other means. The experience itself, and everything in culture that affects the experience, is all we have to share with each other, and in that sharing, new things emerge. Thank you for sharing.

Captain Picard’s speech in Measure of a Man sums up my position. If it seems sentient, treat it as sentient.

https://youtu.be/ol2WP0hc0NY

Thanks for sharing your thoughts on this subject, it was a good read. I would like to propose an alternative definition of consciousness for you to consider. I have a much more narrow view of what it is. I believe it is the sense of self. Having the concept of "I".

I would recommend the book The Origin of Consciousness in the Breakdown of the Bicameral Mind by Julian Jaynes. He has a very thought-provoking theory about how we became conscious. In his research, he found evidence that we may have only been conscious for the past 2,000 years or so. He believed that we invented consciousness as a way to move forward as a species. Think of it as a software upgrade rather than a hardware upgrade to the brain. His research also leads to a possible explanation for the existence of gods and religion in human culture: he proposes that they were manifestations of how the preconscious mind functioned.

If he is correct and consciousness is learned, then it may be possible for AI systems to learn it from us at some point. I think the one thing that AI can never have is humanity. We are a uniquely imperfect result of evolution. So much of what we do is driven by emotions and the chemical structure of our brains. We take the apparent stability of those systems for granted until something goes wrong and the fragility is revealed by brain damage or chemical imbalances. What we know as feelings and emotions could only be simulated by a computer, because it would lack the biochemical structure that makes us behave as we do. I'm not saying that we would want them to have emotions like us; that could result in the most dangerous outcome. I don't think anyone wants a super-intelligent being that acts like a human. Most sociopaths are intelligent, after all. I think it is impossible to fathom what a conscious AI system would even look like. If it managed to come to be, we wouldn't even be able to relate to its thought processes.

Just some thoughts inspired by yours.

I address this issue in the final book of my Europa Trilogy: not with AI, but with the question of how we would judge "humanity" or "sentience" if we were to encounter alien life.