NEW: The cost to 'poison' an LLM and insert backdoors stays roughly constant, even as models grow.
Implication: scaling security is orders-of-magnitude harder than scaling LLMs.
Prior work had suggested that as models grew, poisoning them would become cost-prohibitive.
So, in LLM training-set-land, dilution isn't the solution to pollution.
Roughly the same amount of poisoned training data that works on a 1B-parameter model could also work on a 1T-parameter model.
I feel like this is something that cybersecurity folks will find intuitive: lots of attacks scale. Most defenses don't.
PAPER: POISONING ATTACKS ON LLMS REQUIRE A NEAR-CONSTANT NUMBER OF POISON SAMPLES https://arxiv.org/pdf/2510.07192
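To see why dilution doesn't save you, here is a back-of-the-envelope sketch (not from the paper): if the number of poisoned documents an attacker needs stays near-constant while training sets grow with model size, the poison becomes an ever-smaller *fraction* of the data, yet the attack still works. The document count of a few hundred reflects the paper's reported order of magnitude; the tokens-per-document figure and the Chinchilla-style 20-tokens-per-parameter data budget are my illustrative assumptions.

```python
# Illustrative arithmetic: a near-constant number of poisoned documents shrinks
# to a vanishing fraction of the training set as models (and their datasets) grow.

POISON_DOCS = 250        # near-constant across model sizes (order of magnitude from the paper)
TOKENS_PER_DOC = 1_000   # assumed average length of a poisoned document
TOKENS_PER_PARAM = 20    # assumed Chinchilla-style compute-optimal data budget

for params in (1e9, 13e9, 100e9, 1e12):  # 1B ... 1T parameters
    train_tokens = params * TOKENS_PER_PARAM
    poison_tokens = POISON_DOCS * TOKENS_PER_DOC
    frac = poison_tokens / train_tokens
    print(f"{params / 1e9:>6.0f}B params: poison is {frac:.2e} of training tokens")
```

Running this, the same 250 documents go from about 1 in 100,000 training tokens at 1B parameters to about 1 in 100,000,000 at 1T. The attacker's cost is flat; the defender's "dilution" grows by orders of magnitude and, per the paper, it doesn't help.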
Proponents say age verification = showing your ID at the door to a bar.
But the analogy is often wrong.
It's more like: the bouncer photocopies some IDs and keeps the copies in a shed around back.
There will be more breaches.
But it should bother you that the technology promised to make us all safer is quickly making us less so.
STORIES:

This latest attempt to demand access is *yet another* unreasonable, secret demand on Apple (a TCN) from the Home Office....
Such demands insert vulnerabilities right at the place where we need the strongest protections.
And dictators will inevitably demand that Apple build the same access structure for them.