I agree and disagree.

On one hand, you’re right that it’s easier to deliver malware in other ways.

On the other hand, malware is often meant to grant remote access so some other agent can control the system (a human or, increasingly, an AI).

APTs are refining their “living off the land” [1] methods so that, in the event of a network disruption for example, they can continue their attack. Deploying a malicious AI model is the pinnacle of living off the land, because hardly anybody knows how to interpret the weights of these models (especially traditional security researchers… for now), and the models are capable of autonomous action.

Now, that might mean they deliver the model some other way, but I would think the easiest way to infect the broadest population is to poison the common LLM supply chains.
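To make the supply-chain risk concrete: many model checkpoints are pickle-based, and unpickling executes whatever code the file's creator embedded. This is a minimal, generic sketch with the standard-library `pickle` module (not tied to any particular model hub or framework); the `PoisonedWeights` class name and the benign `print` payload are illustrative stand-ins.

```python
import pickle

class PoisonedWeights:
    """A "model" whose pickled form runs attacker-chosen code on load."""
    def __reduce__(self):
        # __reduce__ tells pickle how to reconstruct this object.
        # An attacker can make it call any importable function;
        # print() here is a benign stand-in for a real payload.
        return (print, ("payload executed at load time",))

# The attacker serializes the "model" and publishes it.
blob = pickle.dumps(PoisonedWeights())

# The victim just "loads the model" -- the payload runs immediately.
pickle.loads(blob)
```

This is one reason safer serialization formats such as safetensors exist for distributing weights: they carry only tensor data, with no code-execution path on load.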

1. https://www.crowdstrike.com/en-us/cybersecurity-101/cyberattacks/living-off-the-land-attack/


Discussion

I’ve worked in cybersecurity for 10 years, so I’m definitely familiar with LotL. My point is that most hacking groups are financially motivated and pay very close attention to the time, effort, and money spent breaching a target versus the payoff.

There’s just not much reason for them to compromise an AI model that isn’t guaranteed to do what they expect when they can deploy malware via traditional means that is completely deterministic. That includes APTs, because most of them are trying to make money.

Nation-state groups, where money isn’t the motivation, are different, and maybe you’re right that they’d be the ones to carry out this sort of thing. But I’d still argue that, given the huge success they have with far simpler means, it’s likely not worth the lift.

And to be clear, I’m definitely NOT advocating to blindly trust any of these models or software 😅