I think we are going to see AI-assisted hacking, data exfiltration, and sabotage, potentially culminating in a full-blown cybersecurity crisis with wide-ranging implications for geopolitics and global stability.

I think this is a near-term threat. Models like GPT-4, and in particular multimodal systems that demonstrate these models' capacity to use tools, tell me the threat is already here. It's only a matter of time until we witness the first major incident.


Discussion

Counter-AI is necessary, and it's critical to open-source AI development.

AI safety and countermeasures are lagging far behind, and advancing much more slowly than the models themselves. Given that we are now in a global AI prisoner's dilemma, there seems to be little to no hope that we will slow down enough to catch up.

Do libraries have safety measures so you can’t study the materials in a way that could bring harm? What safety measures do you mean?

I don't think it's unreasonable to worry that such an incident, if perpetrated against national security infrastructure, could trigger a war or some other massive catastrophe. People waiting for killer robots or AGI have their heads in the sand. The technology we already have has horrifying capacity for these kinds of threats.

👀

Using AI to find potential security vulnerabilities in open source code for nefarious purposes will become popular in the intelligence communities, IMO.
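To make that concrete, here's a minimal sketch of what AI-assisted vulnerability hunting can look like at its simplest: feed a source file to a chat model and ask it to flag suspicious patterns. The model name, prompt, and target.c file are illustrative assumptions, not a tested recipe; real tooling, attacker or defender, would be far more elaborate.

```python
# A minimal sketch of LLM-assisted code review, assuming the OpenAI Python SDK.
# The model name, prompt, and input file are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def flag_candidate_vulns(source_code: str) -> str:
    """Ask the model to list suspicious patterns in the given code."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name; substitute whatever is available
        messages=[
            {"role": "system",
             "content": "You are a code auditor. List potential memory-safety "
                        "and injection issues in this code, with line references."},
            {"role": "user", "content": source_code},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("target.c") as f:  # hypothetical file under review
        print(flag_candidate_vulns(f.read()))
```

The point is that the barrier to entry is a few dozen lines of glue code; the same loop over an entire repo is an afternoon's work for either side.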

Isn’t it also a tool for making open-source software more secure? AI or not, open source has always had that strength/weakness.

@DerekRoss @brockm Not just open source code, either. A lot of closed-source software has source-available clauses for large customers such as governments, and even when it doesn't, I'd question the competence of a spy service that isn't trying to get someone to leak the source code of applications that matter to an enemy's capabilities.

Luckily, we also have AI to help us fix it.

No doubt about it. But it's likely that state actors already have capabilities beyond what models like GPT-4 currently offer.

Especially with these AutoGPTs, it's sort of inevitable. People will need to start looking for solutions to protect their important data. Could we use Bitcoin?

Tbf, hacking is quite common already. Companies rely heavily on third-party code, much of it written by interns a decade ago.

It’s painful, but many deserve to be hacked.

I was selling secure MCUs and authentication devices for years; most companies (even makers of ATMs) were like, "I'm not gonna pay for this, I've not been hacked so far."

In the long run, AI will help massively improve security.

If the typical consumer has access to these AI systems, you have to wonder what kind of AI tools militaries and intelligence agencies have.

Yeah, I think government agencies are already balls-deep in using AI to discover zero-days and help plan hacks. Getting this into the hands of APT groups will be… interesting.