There’s no upside to being passive on AI concerns. If we get it wrong, that could spell the end of us.

Being overly cautious and planning for the worst is good. If we’re wrong and it turns out harmless, no harm done in the process.

Discussion

We have no choice, because we cannot control what each nation state, or individual for that matter, will continue to do with regard to AI development.

I don’t believe this. I think it’s possible to sign treaties and set up independent monitoring to stave off the worst of it. Maybe not everything, but at least have some form of cooperation. We did it with nukes and can try with AI. Better than sitting on our hands doing nothing.

Yes, because treaties are never secretly broken.

Fine, let’s just do nothing, not even try. Just gonna stare at a screen, practice my prompts, and make passive-aggressive snarky remarks.

I’m not trying to be snarky and I don’t know the answer. I also know calling for a treaty won’t work.

That isn’t how humans normally handle new technology. Normally, humans innovate more-or-less freely. New technologies create new problems. Humans cope with or mitigate those problems. Rarely are the problems solved unless the technology is supplanted. Innovation continues.

Yes. So perhaps the answer is to keep innovating and pushing the technology in order to counterbalance the AI.

This doesn’t take into consideration a system that is smarter than us. Imagine an ant trying to control a human. Do you see a scenario where the ant could devise a smart enough plan to keep you and me in check?

The sort of AI contemplated in that thought experiment isn’t even on the horizon.

The “godfather” of AI thinks it may be within 5 years. You know something he doesn’t?

I know that in a few months his estimate will drop by half.

I didn’t watch the video, so I don’t know what he said. But computers today don’t have desires and don’t understand what they’re doing. I don’t see that changing in the foreseeable future.

That’s dangerous thinking. I’m assuming it happens soon.

If you can develop a measurable prediction, I might be willing to bet. I gather you think that something like Lt. Cmdr. Data is going to exist in less than three years?

I don’t think anything; I don’t know. Nobody knows. Most predictions people make are off, but there’s a possibility of them going the other direction and coming sooner rather than later.

I was responding to #[2], who said he assumed computers would have desires and understanding soon, and thought the “godfather” of AI would revise the time limit of his prediction to 2.5 years in a few months.

Noticed afterwards 😉. Sometimes it looks like a person is replying to me when they aren’t.

The growth is exponential even on a log chart.
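For clarity on what that claim would mean: on a log chart, plain exponential growth plots as a straight line, so a curve that still bends upward on a log chart is growing super-exponentially. A minimal sketch in Python (assuming NumPy is available; the growth rates are illustrative, not real AI data):

```python
import numpy as np

t = np.arange(10)
exponential = 2.0 ** t                          # doubles every step
super_exponential = 2.0 ** (2.0 ** (t / 3.0))   # the doubling rate itself accelerates

# On a log scale, exponential growth has constant step-to-step increments,
# i.e. it plots as a straight line:
print(np.diff(np.log10(exponential)))           # ~[0.301, 0.301, ...]

# Super-exponential growth keeps curving upward even after taking logs:
print(np.diff(np.log10(super_exponential)))     # increments keep growing
```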

If being passive means not impeding AI research and development, then the upside is that we don’t prevent or slow down the good things AI can bring us.

Not totally impeding it, but having more cooperation and some reasonable plan to follow that might limit the harms. I don’t know what that would look like, but it has to start somewhere.

The most realistic and most immediate threat these language models pose is an extension of all the problems we've just experienced with "big tech" having too much power over society, except this will probably give them an order of magnitude more power than they currently have.

The only way I can see to deal with that is to build open-source language models and an organizational structure to distribute the inputs (computation) and outputs (sats).

Google seems to be leaning into it and catching up. Just watched the highlights from Google I/O. Overall, boring.

That reminds me, I'm speaking about nostr at a Google conference after Miami. Got to think about what to say.

OpenAI may release a [nuclear weapon], and then all the other companies will need to follow.

What do you think of this approach?

https://www.linkedin.com/posts/nicholasxthompson_the-most-interesting-thing-in-tech-the-ai-activity-7061855803476996096-iYUO

I must say, I think there are parts of this idea that are smart.