With Sora bringing deepfaked videos back into the public discourse, I am reminded of the C2PA standard, which is one tool for attempting to verify that an image, from capture by a camera to display to a user on a website, has not been generated with AI. As much as I'm a Luddite, this feels like a place where increasing adoption of a tech solution (I guess, if developing and implementing a standard counts as "tech") could genuinely help address a pressing issue.
Just added our first guest zine to the DAIR Zine Library: it's Identity 2.0's AI-Z: conversations about resistance and generative AI. Check it out!! >>> zines.dair-institute.org/ai-z <<< from: www.identity20.org A small detail they included that I'm thinking a lot about for my own work: the "AI use disclaimers" throughout the zine. I've been toying with adding a little handwritten "a human wrote this" image at the bottom of my email signature with a link to more info. Are other folks doing this?
Fascinating to be included here: I stand by what I said. It is natural for anyone to be drawn to escapism and fantastical thinking in response to fear and uncertainty. Billionaires are not immune. I will get on my soapbox to voice my frustration that we are often made to engage on tech billionaires' terms, and add an evergreen reminder that their fantasies aren't better, worthier, or truer than yours. 🧵
Fun announcement: I'll be running a Possible Futures workshop in-person in Seattle in a month! Details are in the event link. I would love to meet anyone else who's in Seattle, let's make art and imagine alternative technofutures :)
Reading Empire of AI and thinking about how sooo many so-called "AI experts" popping up out of nowhere to help everyone get on board with the "AI revolution" are woefully unaware of who they're parroting, where those messages came from, and towards what end they're building. 🧵
Finally finished Careless People, and first of all: definitely recommend if you want a better idea of how the insides of some of these tech companies function in practice. Also good if you want to get very upset. There's a lot to unpack from the book, but what's really sitting with me right now is Wynn-Williams' insider perspective on what happened with Facebook in Myanmar. 🧵
I've seen this interview making the rounds lately, and I think it's a spot-on take on the kinds of "agentic AI" being pitched today. "There's a profound issue with security and privacy that is haunting this hype around agents and that is ultimately threatening to break the blood-brain barrier between the application layer and the OS layer" 💯 https://www.youtube.com/live/AyH7zoP-JOg?si=wk3MXi3ghUHP19bx&t=3017
The whole grok white genocide thing*, to me, is a stellar example of why conversations about AI fairness and model de-biasing cannot live exclusively in academia. It's rare that the ego and power struggles that shape technologies are laid *this* bare, and I hope it serves as a reminder that this is happening far more quietly and insidiously all the time!! *ICYMI:
Ugh: Terrifying on its own, but one moment in particular really gets me: "Robert Califf, who served as FDA commissioner [...] said the agency’s review teams have been using AI for several years now." Using "AI" without being more specific here is some real diabolical sleight-of-hand!
Among other things, terrified of the data privacy implications in the proliferation of AI therapy bots. What a horrifying dataset to be leaked, subpoenaed, or auctioned off! And in many cases, irl therapists can *already* function to surveil and police people in crisis. This could automate and remove even more oversight from that. 🧵 https://www.npr.org/sections/shots-health-news/2025/04/07/nx-s1-5351312/artificial-intelligence-mental-health-therapy