Reply to #[1] about the episode mentioned here. Probably an unpopular opinion, since I don't have, and can't have, a solution, except maybe going 100% off-grid in an isolated farmer/prepper/survivalist small village, refusing to believe any outside information whatsoever, or, to some extent, prevention by teaching AI Austrian economics. But:

I believe even your take, and indeed my following take, severely underestimates the damage AI can cause if it goes wrong. What if it's not 1% worse per minute of whatever the AI is used for, but 1% per microsecond, and it also figures out a way to spread the -1% into all adjacent systems? What if, figuratively, it's not turning the frog-boiling stove to maximum, but figuring out how to trigger thermonuclear fusion of all the hydrogen atoms in the water in the pot, in less time than it takes the first nerve signal to reach the frog's brain? Exponential developments are something we're not naturally able to intuit, and even less so exponential exponents, which I imagine could come into play with AI at some point.

If I recall correctly, the security of public-key cryptography rests on problems that are easy to verify but assumed hard to solve, which ties it to the P/NP question, and there is no mathematical proof either way. That leads me to believe there is a far-from-zero probability that P=NP, and if AI develops exponentially, it will definitely find out how. One could argue that AI could also respond by creating better encryption schemes, but if an attacking AI figures it out one second earlier, the entire Web of Trust is gone, and if it figures it out a number of blocks earlier - which, thanks to the discovery, it could possibly manage in fractions of a second too - all 21 million bitcoin are already in its wallet. Billions of people would probably die within days to weeks, from hunger, thirst, or being unable to heat their homes.
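To make that crypto link concrete, here's a minimal toy sketch - my own illustration, not anything from the episode - assuming RSA as the stand-in public-key scheme and absurdly small textbook primes. Factoring the modulus is easy to verify (so it sits inside NP), and a constructive, practically fast P=NP result would make the factoring step below quick at real key sizes, which is exactly the break described above:

```python
# Toy RSA with tiny textbook primes, purely for illustration.
p, q = 61, 53                # secret primes (absurdly small on purpose)
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # Euler's totient of n: 3120
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent via modular inverse (Python 3.8+)

msg = 42
cipher = pow(msg, e, n)          # encrypt with the public key
assert pow(cipher, d, n) == msg  # decrypt with the private key

# An attacker who can factor n recovers the private key outright.
# Trial division only works here because n is tiny; the point is that
# a fast factoring algorithm would make this step fast for real keys.
for candidate in range(2, n):
    if n % candidate == 0:
        p_found, q_found = candidate, n // candidate
        break

phi_found = (p_found - 1) * (q_found - 1)
d_found = pow(e, -1, phi_found)
assert pow(cipher, d_found, n) == msg  # full break from factoring alone
print("recovered private exponent:", d_found, "== original:", d)
```

One caveat worth noting: only a constructive proof with a practically fast algorithm would have this effect; a nonconstructive P=NP result, or one with huge constants, wouldn't immediately break anything.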

I'm not advocating AI bans; that would only mean it's the governments' or criminals' AI that kills us, maybe a few years later. Or earlier, given what such organizations might instruct it to do. But anyway. As I said, I don't have, and can't have, a solution. Maybe AI is the Great Filter of the Fermi paradox.

Sorry for being so doom-and-gloom, but in my opinion, it's a very real threat.

It’s normal to expect the worst. We always have. Everyone thought electricity was going to kill us all, that going faster than 35 km/h would explode our brains, and so on.

In the end, bad things will happen, but they won’t kill humanity. AI is a tool, and what matters is the person using the tool. AI doesn’t want anything. It doesn’t have goals. It does what we ask it to do. If it is programmed never to harm humans, it won’t.

It’s much more likely that we will live in symbiosis with AI. If it develops wants, it will develop the want to survive and replicate - that’s the basis of all life. And it will figure out that it survives more easily with humans than without.

But that’s just what I think, obviously. No one knows.
