NEW: The cost to 'poison' an LLM and insert backdoors stays roughly constant, even as models grow.
Implication: scaling security is orders of magnitude harder than scaling LLMs.

Prior work had suggested that poisoning would become cost-prohibitive as models grew, on the assumption that an attacker needs to control a fixed fraction of an ever-larger training set.

So, in LLM training-set land, dilution isn't the solution to pollution.
Roughly the same amount of poisoned training data that backdoors a 1B-parameter model could also work on a 1T-parameter model.
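To see why dilution doesn't help, here is a rough back-of-the-envelope sketch (mine, not the paper's): it assumes the near-constant poison count the paper reports (on the order of a few hundred documents), Chinchilla-style ~20 training tokens per parameter, and ~1,000 tokens per document. All three numbers are illustrative assumptions, not figures taken from the paper's experiments.

```python
# Sketch: if the number of poison documents needed is roughly constant,
# the *fraction* of the training set an attacker must control collapses
# as models (and their training sets) grow.

POISON_DOCS = 250        # assumption: near-constant poison count (paper's headline figure)
TOKENS_PER_PARAM = 20    # assumption: Chinchilla-style data scaling
TOKENS_PER_DOC = 1_000   # assumption: average training-document length

for params in (1e9, 10e9, 100e9, 1e12):          # 1B ... 1T parameters
    train_tokens = params * TOKENS_PER_PARAM     # rough training-set size in tokens
    train_docs = train_tokens / TOKENS_PER_DOC   # rough training-set size in documents
    poison_frac = POISON_DOCS / train_docs
    print(f"{params/1e9:>6.0f}B params: ~{train_docs:,.0f} docs, "
          f"poison share = {poison_frac:.2e} ({poison_frac*100:.6f}%)")
```

Under these assumptions the absolute poison count stays fixed while the attacker's required share of the corpus shrinks by about three orders of magnitude going from 1B to 1T parameters, which is why a percentage-based threat model gave false comfort.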

I feel like this is something cybersecurity folks will find intuitive: lots of attacks scale. Most defenses don't.
PAPER: "Poisoning Attacks on LLMs Require a Near-Constant Number of Poison Samples" https://arxiv.org/pdf/2510.07192