Location tracking based on interior pictures. It will be abused to target people. Post the inside of your place at your peril.
Earliest days of vibecoding-as-a-target. Without a radical increase in security, vibecoders will get wiped out & lose their savings. And their companies will get hit with fat breaches. Me? I'm waiting for attackers to figure out how to reliably slip backdoors into vibecoded outputs at scale.
Neuroticism? Ripping. Conscientiousness & agreeableness? Dipping. Via FT:
NEW: 🇩🇪 Germany's top court says spyware severely violates fundamental rights. Bans spyware in cases carrying sentences under 3 years. Enforces tough proportionality tests on all surveillance. Restricts spyware to serious cases. Interesting development.

Court says: capturing data at the source (i.e. on someone's phone) is maximally invasive, especially given how much of our lives happens online. They also surface the security risks to systems from this kind of surveillance.

Watching Germany's highest court grapple with spyware's invasiveness & rights violations is instructive. States wielding spyware without robust legal limitations and tight judicial oversight are almost guaranteed to be violating their citizens' basic rights. In so many jurisdictions, state secrecy & a lack of effective legal challenges mean spyware harms are happening daily.

Huge credit to German digital freedoms organization #digitalcourage for bringing this case. Court statement:
Internet-connected microphones in school bathrooms. What could go wrong? Mandated microphones in private spaces are a bad idea. Throwing invasive sensors into private spaces rarely fixes socially scary problems, but it is almost guaranteed to have risky downsides. Story:
Regular people know that age verification mandates won't work. But they are worried about their children's safety, and they aren't being offered non-dystopian alternatives.
LLM chat exposures keep on coming. Why? My theory is that these platforms don't do a very good job explaining to users what their public/share features mean. Result: users may assume that "public" doesn't necessarily mean anyone is indexing or caching it. Story:
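The gap between "shareable" and "indexable" comes down to crawl directives. A platform can serve a public share link while still telling crawlers not to index or cache it, via the standard `X-Robots-Tag` response header. A minimal sketch (the `crawl_directives` helper and the example headers are mine, not any platform's actual behavior):

```python
def crawl_directives(headers: dict) -> set:
    """Collect robots directives from an X-Robots-Tag response header."""
    tag = headers.get("X-Robots-Tag", "")
    return {d.strip().lower() for d in tag.split(",") if d.strip()}

# A share page that opts out of indexing and caching:
safe = crawl_directives({"X-Robots-Tag": "noindex, noarchive"})
# -> {"noindex", "noarchive"}

# A share page that sends no directives is fair game for
# search engines and archive crawlers:
exposed = crawl_directives({})
# -> set()
```

If a shared-chat URL ships with no such directives (header or `<meta name="robots">`), "public" quietly means "searchable forever", which is exactly the mismatch users seem to be tripping over.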
Age verification laws are coming fast. And, from my perspective, opponents are struggling to find impactful messaging that explains to the general public the damage these laws are about to do to freedom. Or to propose alternate futures that address the underlying anxieties. Sure, most folks here on #Nostr intuitively understand the dangers... and nod along when we gesture at surveillance overreach. But I worry that the common language for talking about these initiatives typically relies on priors that are not universally shared outside of people who live and breathe concerns about tech. Saying that something is a surveillance dystopia works on me. But not on the neighbors. I'm guilty of being inside this language bubble too, and it's hard to escape. Yet, when faced with politicians talking about protecting kids from bad things that parents feel they see right now... I worry that the communities doing pushback are struggling to:

1 - find framing that makes *enough sense* to the vast majority of people that they say 'ok, this is net bad' and push back

2 - find their own ways to productively connect with the anxieties that politicians are drawing on, e.g. worried parents

3 - offer honest, well-meaning alternative paths for the underlying problems

Anyone have thoughts on this? #AskNostr
It seems to me like a strong anti-AI view is becoming left/progressive-coded. I'd love to understand this better. Anyone have thoughts?
Rhisotope. Sauce: