jsr

npub1vz03...ttwj
Chasing digital badness at the citizen lab. All words here are my own.
I TRUST YOU BUT YOUR AI AGENT IS A SNITCH: Why We Need a New Social Contract

We're chatting on Signal, enjoying encryption, right? But your DIY productivity agent is piping the whole thing back to Anthropic. Friend, you've just created a permanent, subpoena-able record of my private thoughts, held by a corporation that owes me zero privacy protections.

Even when folks use open-source agents like #openclaw in decentralized setups, the default/easy configuration is to plug in an API, which backhauls the data to Anthropic, OpenAI, etc. So those providers get all the good stuff: intimate confessions, legal strategies, work gripes. Worse? Even if you've made peace with this, your friends absolutely haven't consented to their secrets being piped to a datacenter. Do they even know?

Governments are spending a lot of time trying to kill end-to-end encryption, but if we're not careful, we'll do the job for them. The problem is big & growing:

Threat 1: proprietary AI agents. Helpers inside apps, or system-wide stuff. Think: desktop productivity tools from a big company. Hello, Copilot. These companies already have tons of incentive to soak up your private stuff & are very unlikely to respect developer intent & privacy without big fights. (Those fights need to keep happening.)

Threat 2: DIY agents that are privacy-leaky as hell, not through evil intent or misaligned ethics, but just because folks are excited and moving quickly. Or carelessly. And are using someone's API.

My sincere hope is that the DIY / open-source ecosystem spinning up around AI agents has some privacy heroes in it. Because it should be possible to do some building & standards work that treats permission and privacy as the first principle. Maybe we can show what's possible for respecting privacy so that we can demand it from big companies?

Respecting your friends means respecting when they use encrypted messaging. It means keeping privacy-leaking agents out of private spaces without all-party consent.

Ideas to mull (there are probably better ones, but I want to be constructive):

Human-only mode / X-No-Agents flags. How about converging on some standards & app signals that AI agents must respect, absolutely: signals that an app or chat can emit to opt out of exposure to an AI agent. (Rough sketch below.)

Agent Exclusion Zones. For example, start from the premise that the correct way to respect developer (& user) intent with end-to-end encrypted apps is that they not be included, perhaps with the exception [risky tho!] of whitelisting specific chats. This matters right now because so many folks are getting excited about connecting their agents to encrypted messengers as a control channel, which is going to mean lots more integrations soon.

#NoSecretAgents Dev Pledge. Something like a developer pledge that agents will declare themselves in chat and not share data to a backend without all-party consent.

None of these ideas are remotely perfect, but unless we start experimenting with them now, we're not building our best future.

Next challenge? Local-only / private processing: local-first as a default. Unless we move very quickly towards a world where the processing agents do is truly private (i.e., not accessible to a third party) and/or local by default, then even if agents aren't shipping your Signal chats, they are creating an unbelievably detailed view into your personal world, held by others. And fundamentally breaking your own mental model of what on your device is & isn't under your control / private.
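Purely to make the "X-No-Agents flags" idea concrete, here's a minimal sketch of what an agent-side check could look like, assuming a host app exposed such signals. None of this exists today: the flag name, the message fields, and the allowlist are all hypothetical.

```python
# Hypothetical sketch only: the "x-no-agents" flag and these message fields are
# invented for illustration; no messaging app or agent framework ships them today.

from dataclasses import dataclass, field

@dataclass
class IncomingMessage:
    chat_id: str
    text: str
    flags: dict = field(default_factory=dict)  # signals the host app could emit

def agent_may_process(msg: IncomingMessage, allowlisted_chats: set[str]) -> bool:
    """Default-deny policy: respect exclusion signals before any backhaul."""
    if msg.flags.get("x-no-agents", False):
        return False  # app declared an agent exclusion zone: hard stop
    if msg.flags.get("end_to_end_encrypted", False):
        # E2EE chats stay out unless explicitly allowlisted (the risky exception).
        return msg.chat_id in allowlisted_chats
    return True

if __name__ == "__main__":
    signal_chat = IncomingMessage(
        chat_id="family-group",
        text="legal strategy, do not share",
        flags={"end_to_end_encrypted": True, "x-no-agents": True},
    )
    # The agent must drop this on the floor, not ship it to a hosted API.
    print(agent_may_process(signal_chat, allowlisted_chats=set()))  # -> False
```

The design choice worth arguing about is the default: E2EE chats start excluded and everything else has to be opted in, rather than the reverse.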
NEW: Microsoft turned over BitLocker keys to the FBI.

When you escrow your disk encryption keys with someone, that someone can be targeted with a warrant. This case is a really good illustration that if you nudge users with a default to save their keys with you... they will do so, & may not fully understand the implications.

Of course, once the requests start working... they are likely to accelerate.

Story: https://www.forbes.com/sites/thomasbrewster/2026/01/22/microsoft-gave-fbi-keys-to-unlock-bitlocker-encrypted-data/
GOOD MORNING. Today's massive outages nicely illustrate which of your favorite internet things are secretly Amazon-dependent. Specifically on the US-EAST-1 region, which woke up with Main Character Syndrome. Result? Massive outages.

Sure, Amazon has regions. But US-EAST-1 is the legacy/default home for a pile of services... and other global Amazon services also depend on it. So when there was trouble, it was quickly everywhere. Hyperscalers rule *almost* everything around us, and this is absolutely bad news for all sorts of resiliency.

Amazon sez: root cause = DNS resolution failures for DynamoDB... which a ton of things depend on. They say they are mostly mitigated & have a pile of backlog to clear.

But this is a great moment to think about just how many eggs that matter are in one basket... https://health.aws.amazon.com/health/status
NEW: 🇰🇵 DPRK hackers have begun hiding malware on the blockchain. Result: decentralized, immutable malware from a government crypto-theft operation.

It only cost $1.37 USD in gas fees per malware change (e.g., to update the command & control server).

Blockchains as malware dead drops are a fascinating, predictable evolution for nation-state attackers. And blockchain explorers are a natural target. Nearly impossible to remove.

Experimentation with putting malware on blockchains is in its infancy. Ultimately there will be some efforts to implement social-engineering protections around this, but combined with things like agentic AI & vibe coding by low-information people... whew boy, this gold seam is going to be productive for a long time.

Here they used social engineering, but I expect attackers to also experiment with directly loading zero-click exploits onto blockchains, targeting things like blockchain explorers & other systems that process blockchains... especially if those are sometimes hosted on the same systems & networks that handle transactions / have wallets.

REPORT: https://cloud.google.com/blog/topics/threat-intelligence/dprk-adopts-etherhiding
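For a sense of why this makes such a durable dead drop, here's a hedged sketch (using web3.py) of the read side. The RPC endpoint, contract address, and function selector are placeholders, not indicators from the report; the point is that fetching the payload is a free, read-only eth_call, while the attacker only pays gas when they rotate it.

```python
# Sketch of the retrieval half of an EtherHiding-style dead drop, written from
# the analyst's side: a read-only eth_call that returns whatever bytes the
# attacker last wrote into a contract. Endpoint, address, and selector below
# are placeholders, not real indicators.
from web3 import Web3  # web3.py (v6 API assumed)

w3 = Web3(Web3.HTTPProvider("https://ethereum-rpc.example.com"))  # placeholder RPC

DROP_CONTRACT = Web3.to_checksum_address("0x" + "00" * 20)  # placeholder address
GETTER_SELECTOR = "0x20965255"  # first 4 bytes of keccak256 of a no-arg getter; placeholder

# eth_call executes locally on the node: no transaction, no gas, no on-chain
# trace of the read. Only the attacker's *updates* to the stored payload cost
# gas (the ~$1.37-per-change figure above).
payload = w3.eth.call({"to": DROP_CONTRACT, "data": GETTER_SELECTOR})

print(f"fetched {len(payload)} bytes of attacker-controlled data")
# A real stager would decode/decrypt this blob to get its next stage or the
# current C2 address; defenders can poll the same call to track changes.
```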
NEW: The cost to 'poison' an LLM and insert backdoors is relatively constant, even as models grow. Implication: scaling security is orders of magnitude harder than scaling LLMs.

Prior work had suggested that as model sizes grew, poisoning them would become cost-prohibitive. Turns out that in LLM training-set land, dilution isn't the solution to pollution: roughly the same amount of poisoned training data that works on a 1B-parameter model could also work on a 1T-parameter model.

I feel like this is something cybersecurity folks will find intuitive: lots of attacks scale. Most defenses don't.

PAPER: "Poisoning Attacks on LLMs Require a Near-Constant Number of Poison Samples" https://arxiv.org/pdf/2510.07192
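A quick back-of-the-envelope shows why this breaks the old intuition. The ~250-document figure is roughly the order the paper reports; the corpus sizes are assumptions chosen just to show the scaling.

```python
# Back-of-the-envelope on why "near-constant poison count" matters.
# POISON_DOCS and the corpus sizes are illustrative assumptions, not exact
# numbers from this post.

POISON_DOCS = 250  # roughly the order of magnitude reported in the paper

corpora = {
    "smaller model (~1B params)": 10_000_000,      # ~1e7 training documents, assumed
    "larger model (~1T params)":  10_000_000_000,  # ~1e10 training documents, assumed
}

for name, total_docs in corpora.items():
    fraction = POISON_DOCS / total_docs
    print(f"{name}: {POISON_DOCS} poisoned docs = {fraction:.6%} of training data")

# If attack success tracked the *fraction* of poisoned data, the bigger model
# would need ~1000x more poison. The finding is that the *absolute count* is
# what matters, so dilution isn't the solution to pollution.
```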
NEW: A breach of Discord age-verification data. For some users this means their passports & driver's licenses. Discord has only run age verification for six months.

Age verification is a badly implemented data grab wrapped in a moral panic. Proponents say age verification = showing your ID at the door of a bar. But the analogy is often wrong. It's more like: the bouncer photocopies some IDs & keeps them in a shed around back.

There will be more breaches. But it should bother you that a technology promised to make us all safer is quickly making us less so.

STORIES: https://www.forbes.com/sites/daveywinder/2025/10/05/discord-confirms-users-hacked---photos-and-messages-accessed/