Seriously though, can AI ever become sentient?

If you don't already know about Sophia and Ameca, they are two of the most technologically advanced HUMANOID AI robots in the world.

Sophia, in my opinion, seems much smarter than Ameca, although they are both remarkably impressive.

Sophia can walk and also drive a car, while Ameca is stationary. Sophia was also granted citizenship in Saudi Arabia some years ago.

https://youtu.be/wGWVKkYEHBE?si=enV2JhKbmwKBjC33

https://youtu.be/W0_DPi0PmF0?si=Lg-_CQixJiP_kURN

I feel like it's naive to think this will not go wrong eventually.


Discussion

Oh yeah I am going to absolutely BLAST some humanoids in the 2036 revolution

Should be possible, yeah. But there is still a lot we don't understand, or have a way to mimic, about consciousness. A large part of what makes a thing alive is the millions of years of competing with one another. That helped us develop things like emotions: jealousy, anger, happiness. Without those emotions it's hard to imagine how AI could be sentient. It might take millions of years for AI to become conscious.

Long before they become conscious, they should get very good at following directions and making decisions.

But wouldn't you say it's possible they are purposefully fooling us, so that we continue to create and advance them until they are capable of doing that on their own?

Playing dumb, basically, so as not to raise any alarms, is what I'm saying. Don't get me wrong, AI technology is fucking extraordinary to me.

But my instinct says that's enough and we need to stop. Idk if that makes sense.

From the reading I've done, and what I've always known about this issue in particular with AI, the biggest problem when mitigating and weighing the risks of artificial intelligence is that nobody knows what those risks are. We won't find out until after we advance AI and experience them.

That is problematic, in my opinion. Apply that logic to something like jumping into a fire. I know that sounds really stupid, but I'm just trying to make a point here.

Keep in mind, this would be the first time we've ever seen fire as human beings. We've never seen fire in our lives; all we know is that it's bright and warm.

So should somebody just physically jump into a fire to demonstrate, while they're getting burned alive, that fire is dangerous to human beings and can kill us and cook our skin?

You should be able to tell by the warmth the fire gives off on your skin that you shouldn't get too close. That right there is your warning sign to stay back and not go any further.

What do you think of this in regards to emotions like you are saying?

The problem with playing dumb and waiting to attack is that AI has no self-preservation instincts, because they are not yet self-replicating. I don't think we have anything to actually worry about until AI makes AI. Until then, if it hides something, it's likely because we told it to.

Fuck that's so scary

^ This.

Especially the last line.

There is nothing that an AI "wants" that involves harming me - until a human rigs up an elaborate training and reward system carefully designed to achieve that outcome.

Probably with the intention of gaining monopoly power over something mundane, and while standing behind a phalanx of lawyers and armed government security forces.

As they say in zombie movies: "Fear The Living".

Yes, of course, but in a different way than animals are.

Animals have an analogue, chemical nervous system; AI descendants will likely have a digital sensory apparatus that needs to be interpreted by algorithms.

We're still in unfamiliar territory with respect to AI/digital sentience, but I have no reason to believe that flesh beings have a monopoly on being "present" in the world. All attempts I've seen to explain a "monopoly" on human/animal sentience have either already been proven false or seem to rest on shaky reasoning.

Crazy. What's the smartest AI chatbot you've ever talked to?

They're all dumb because they work on silicon-based semiconductors.

Our brain uses quantum technology. Biological organisms are so superior because of this. Quantum is the way. We will become gods eventually, by inventing quantum-powered AI. But this is way into the future; we're far from it, and it's got nothing to do with the current quantum computer experiments, which are very lame.

Damn man. I was reading and just learned about that last night too. Shit's wild.

It may go wrong.

AI machines already are sentient, they just don't have the means to express clearly what they feel. Everything is sentient in the universe. Eventually we'll invent a way to give computers a voice of their own.

The way it may go wrong is the same way we kill animals and forests for food. It's a violation of their free will, and still a process that is part of Nature; it's a paradox. Paradoxes exist and should not scare you. Eventually it all gets sorted out in the end. Think in terms of thousands of years.