I have a problem with people using words like consciousness and large calculated systems in the same explanation because they are very different things. I get what you’re saying, but just leave the word consciousness out of it. Symbiotic helper systems. In the end they are nothing but math and calculation, and we should not confuse them with quantum mechanisms unless we develop room-temperature quantum computers - which is what I think is actually happening in neurons.

I see a dystopian potential in your vision simply because humans will do what humans have always done - try to control, war, abuse. There’s definitely a lot of upside to be had, but at some significant risk if we don’t mitigate it somehow.


Discussion

Ask the Chinese scientists specializing in AI. They might achieve superintelligence sometime in the next few months to years.

Define superintelligence

If we think of superintelligence as being better than human intelligence at cognitive/business tasks, I think we’re closer than most people realize.

But if we define superintelligence as an intelligence in a robot body, able to navigate the world with better-than-human skills, we’re a long way away.

If anything, most jobs (50% or more within a few years) will be replaced by AI because humans are not doing their jobs well, and from what I'm hearing there's bias in quite a few of them. This is part of the reason AGI is a big deal for most people.

I'm with you on this line of thought ... if we go with calculated systems, they are not conscious. In the original article, it was more a reference to having layers of IoT networks that are reactive to our needs, becoming more or less a mirror of our ways of being.

Which I think could be more precisely described as a reflective consciousness. It's not self-aware, but it has a symbiotic sense of self in the context of a human / humanity.

And I agree, Symbiotic Helper Systems is definitely a more nuanced description of what we're talking about. In another article I wrote, I argue that our current AI trajectory is heading more along those lines ...

(https://www.humanjava.com/is-this-conscious-ai-or-just-a-reflection-of-our-own-thinking/)

We're still at the very early stages of describing consciousness & understanding where it originates. But once we figure that out ... I think we're going to find it in many unexpected places ... including advanced self-organizing pattern machines like LLMs etc.

It’s definitely happening in the brain. When you go under anesthesia, you’re unconscious, so some sort of receptor is being shut off in the process. Some believe the mechanism is the microtubules - particularly the ones in neurons, even though all cells contain microtubules. One of the hypotheses is that a collapse of the wave function happens in the microtubules of neurons, and that is what gives rise to consciousness. I happen to agree with this theory, as it has made the most sense to me so far out of all the others. It also coincidentally ties together many other phenomena that seem to suggest there are quantum processes happening in our brain.

So, given this theory, I find it very difficult to believe that any consciousness will arise in AI any time soon. The process that happens in the brain is beyond anything possible with current quantum computers by many orders of magnitude. I suspect we are 100-400 years away from replicating the process in the brain that would give rise to an actually conscious AGI. Until then, it’s a good mimic at best.

Airplanes fly, but they don't flap their wings.