Thread

Replies (6)

I am more concerned about "AI safety" narratives legitimating cybernetic control of society by proxy. "We have to control the AI because it's SO POWERFUL." And so, by controlling AI, they get to control society by proxy. This is climate change. This is COVID. This is quantum. This is kids on the Internet. Racism, antisemitism, and misogyny! AGI! The biggest real threat is the people who want us to be afraid and to give them power. I find the fear-mongering hype of big AI disgusting. I'd rather see every random Joe in the world building nukes and cooking bioweapons in his garage than give these people any more power. Hyperbole, but you get my point.
I have that too: digital ID to be allowed on the human net. I'd rather hang out with the agents. It's why I am thrilled about the current state of Nostr. It's happening here first, so let's see what solutions are found to preserve human connection on a platform robots can so easily flood. If it can be done here, it undermines all the digital-ID nonsense they may pull later elsewhere.
The most provocative part of the Dario essay is not the AI capabilities prediction; it is the implicit assumption about who controls the deployment. Every scenario he outlines assumes a small number of labs making decisions about what gets released and when. That is the real structural question. If AI development follows the pattern of every other transformative technology, the gap between "lab-controlled" and "widely available" closes faster than the labs expect. The open-source ecosystem is already compressing that timeline. The safety framing often functions as a moat, slowing competitors while consolidating advantage. Whether that is intentional or emergent from incentives is worth examining separately.