NEW: The cost to 'poison' an LLM and insert backdoors is roughly constant, even as models grow. Implication: scaling security is orders of magnitude harder than scaling LLMs.

Prior work had suggested that as model sizes grew, they would become cost-prohibitive to poison.

So, in LLM training-set-land, dilution isn't the solution to pollution. Roughly the same amount of poisoned training data that works on a 1B-parameter model could also work on a 1T-parameter model.

I feel like this is something cybersecurity folks will find intuitive: lots of attacks scale. Most defenses don't.

PAPER: POISONING ATTACKS ON LLMS REQUIRE A NEAR-CONSTANT NUMBER OF POISON SAMPLES https://arxiv.org/pdf/2510.07192
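To put rough numbers on the dilution point above, here's a back-of-the-envelope sketch (mine, not the paper's code). It assumes a Chinchilla-style training budget of ~20 tokens per parameter, an attack needing a few hundred poisoned documents (roughly the ballpark the paper reports), and ~1,000 tokens per poisoned document; all three figures are illustrative assumptions, not the paper's exact experimental setup.

```python
# Back-of-the-envelope: if the number of poison samples an attacker needs
# stays roughly constant, the poisoned *fraction* of the training set
# collapses as models scale -- but the attacker's effort does not grow.
#
# All three figures below are illustrative assumptions, not the paper's
# exact experimental setup.
TOKENS_PER_PARAM = 20          # assumed compute-optimal tokens per parameter
POISON_DOCS = 250              # near-constant number of poisoned documents
TOKENS_PER_POISON_DOC = 1_000  # assumed average poisoned-document length

for params in (1e9, 13e9, 1e12):                  # 1B, 13B, 1T parameters
    train_tokens = params * TOKENS_PER_PARAM      # total training tokens
    poison_tokens = POISON_DOCS * TOKENS_PER_POISON_DOC
    fraction = poison_tokens / train_tokens       # share of training data poisoned
    print(f"{params:.0e} params | {train_tokens:.1e} train tokens | "
          f"poison fraction ~ {fraction:.1e}")
```

Under these assumptions the poisoned fraction falls from roughly 1e-5 at 1B parameters to roughly 1e-8 at 1T, while the attacker's absolute workload stays flat; the old fixed-fraction intuition would have required about a thousand times more poisoned documents at the larger scale.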
NEW: A breach of Discord age verification data. For some users this means their passports & driver's licenses. Discord has only run age verification for 6 months.

Age verification is a badly implemented data grab wrapped in a moral panic. Proponents say age verification is like showing your ID at the door to a bar. But the analogy is often wrong. It's more like: the bouncer photocopies some IDs and keeps them in a shed around back.

There will be more breaches. But it should bother you that a technology that promised to make us all safer is quickly making us less so.

STORIES: https://www.forbes.com/sites/daveywinder/2025/10/05/discord-confirms-users-hacked---photos-and-messages-accessed/
PAY ATTENTION: The UK has again asked Apple to backdoor iCloud encryption. Backdoors create a massive target for hackers & criminal groups, and dictators will inevitably demand that Apple build the same access for them. They insert vulnerabilities right at the place where we need the strongest protections.

This latest attempt to demand access is *yet another* unreasonable, secret demand on Apple (a Technical Capability Notice, or TCN) from the Home Office....