Can someone explain to me how people reason about the dangers of AGI? Why would a program that guesses the next word of a given text suddenly, or gradually, develop a motivation to do something it's not "supposed to" do? Or feel an emotion, which sounds particularly nuts to me, as if my depressed spaghetti Python script would make an appointment with a psychiatrist.
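
For concreteness, here's roughly what I mean by "guessing the next word": a toy bigram sketch in Python (purely illustrative, nothing like a real LLM). There are no goals or feelings anywhere in it, just sampling from observed statistics:

```python
import random
from collections import defaultdict

# Toy illustration (not a real LLM): a bigram model that "guesses the
# next word" by sampling from words seen to follow the current word.
corpus = "the cave echoes the sound and the sound fades".split()

followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def next_word(word):
    # Pick a plausible continuation; there is no motivation here,
    # just a lookup into observed statistics.
    options = followers.get(word)
    return random.choice(options) if options else None

word = "the"
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    print(word)
```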

It's confusing that we even call it Intelligence and not a simulation **of** Intelligence.

To give a metaphor: if you yell into a cave, it will generate a new, somewhat different sound, which makes it seem like there's someone else in there, but you surely won't think the cave is intelligent, let alone that it will develop the capacity to feel if you feed it enough yelling.


Discussion

> Can someone explain to me how people reason about the dangers of AGI?

I think we should always be wary of new technology and its implications for society, be it AGI or anything else. AGI might be neither intelligent nor malicious, but the people who develop it and the policies around it certainly can be.

Agreed, there will always be malicious people and policies to fight against.

To me it's not so much about the dangers of AGI, or even how we define AGI.

It’s more about the speed of automation and how it can affect the economy and society in a way that we don’t fully comprehend yet.

Still, it feels like this automation will be more or less scoped to the digital world; it's true that I can't comprehend how it might spread into different physical areas.

Yeah, I think it's really interesting to consider how the usual advice from previous generations to younger ones, to get an education for a cognitive desk job as a way of securing your future, doesn't seem as applicable to future generations anymore. 🤔

Agree 💯

It's not the AI that I fear; it's realistic fakes and undetectable spam that I worry about. Fortunately, I think Bitcoin and nostr, with the ability to attach micropayments to everything, will be what's needed to put a cost on spam.
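
As a rough illustration of the idea (all names and numbers here are hypothetical, not a real nostr or Lightning API), a relay could simply refuse notes that don't carry a minimum payment, so one message costs almost nothing but a million messages do:

```python
from dataclasses import dataclass

# Hypothetical sketch: a relay only accepts a note if it carries a
# small payment, making bulk spam economically costly. `Note`,
# `accept`, and the fee threshold are illustrative assumptions.
SPAM_COST_SATS = 10  # assumed minimum fee per note

@dataclass
class Note:
    author: str
    content: str
    paid_sats: int  # amount attached via a micropayment

def accept(note: Note) -> bool:
    # Reject anything below the minimum fee.
    return note.paid_sats >= SPAM_COST_SATS

print(accept(Note("alice", "hello", paid_sats=10)))     # True
print(accept(Note("spammer", "buy now", paid_sats=0)))  # False
```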