More worried about the threat of “terrorists using AI” being seized upon by government for further restrictions than I am the very real threat you point out tbh.

Bad stuff can and will happen, and worse over time as it gets more powerful. Governments can't control it, but control will be their natural reaction, and it will fail. Outdated institutions inhabited by weak geriatrics and ideologues of no substance are supposed to guide humanity through this shift?

This is where your anarchist streak should come out Stu. Authority isn’t going to navigate this, people are.

It would be really useful to have as much of this open source as possible, given how little understanding we have.

Not restricted. Recognise that we're reaching a new frontier for humanity here, for better and for worse. People need to solve how we're going to deal with this, not the state. The more minds, the better.

Lest this be the next iteration of nuclear power, held back from humanity because Team America World Police declare war on it.


Discussion

I think the chances of any government getting out ahead of AI are nil.

They haven’t even figured out how to tax capital without it escaping every time. They’ve been trying for 2,500 years.

So I do agree, it’s gonna be people who tackle this one.

I think it’s almost impossible for small entities to get sufficient data to train powerful models at the moment. But if the models are open source then small entities will have them.

We are two years away from having a personalised AI on your smartphone that knows personal information about you and works to serve your interests. You can shun this if you like, but you will have a hard time competing with people who embrace it.

We will also have jailbroken versions of AI running on Linux devices. These will be easy to set to nefarious or malevolent tasks, and bad people will do that.

Including every kind of blackmail, fraud, threat, and criminal activity you can think of.

By 2030 the world will look very different.

The threat level is going vertical and soft targets are going to get hammered.

Expect to see more things like this: https://www.beekeeperai.com/company

Public/private partnerships, whether between universities and industry or the state and startups. They will print money into these in an arms race, because that’s how state power is bound to react to an existential threat.

When all you’ve got is a hammer… Statism will go hard!

Not sure why you need a company to do that. Sounds like a technical issue with the data schema and DSP layer.

There’s gonna be a full-on bubble and bust of AI companies making dumb promises and then becoming obsolete minutes after raising capital.

Sell shovels.