Spent 3 days at a workshop with basically only academic computer scientists and it is scary how many fundamental democratic rules and concepts are constantly put up for debate. CS won't save us. Not one bit.
The response to the predicted crash of the AI sector is often that "every crash leaves something useful behind" and that this time it will be the models. I do not think that is the case: AI models age like milk, and the infrastructures left behind won't be ones I see as helpful for democratic societies.
It's so painful to contemplate that Google just shoved their half-baked "AI Overviews" (which nobody asked for) into the search page to juice their "so many people are using our AI" numbers and keep stock market psychopaths happy.
The first parts of the @npub1yyz7...a6kf program are live, so I can already do some promotion: I'll be speaking about cyberlibertarianism and why this antidemocratic way of thinking pervades many of the net movement's cherished dogmas.
"We’ve been doing this whole internet thing for a while now, and it’s pretty clear that just about all the metrics are bad. They’ve turned the internet into a game to be won, a system to be gamed, a race to the biggest numbers even when the numbers don’t mean anything. Maybe we’d all be better off without the numbers, but they’re not going anywhere. So all we can do is remember: “views” are not views. Views are lies." (Original title: ‘Views’ are lies)
"I felt the slow loss of competence over time when I relied on [AI], and I recommend everyone to be cautious with making AI a key part of their workflow.[...]When you are using AI, you are sacrificing knowledge for speed." (Original title: Why I stopped using AI code editors)
"LLM did something bad, then I asked it to clarify/explain itself" is not critical analysis but just an illustration of magic thinking. Those systems generate tokens. That is all. They don't "know" or "understand" or can "explain" anything. There is no cognitive system at work that could respond meaningfully. That's the same dumb shit as what was found in Apple Intelligence's system prompt: "Do not hallucinate" does nothing. All the tokens you give it as input just change the part of the word space that was stored in the network. "Explain your work" just leads the network to lean towards training data that has those kinds of phrases in it (like tests and solutions). It points the system at a different part but the system does not understand the command. It can't.
"AI in the enterprise is failing faster than last year [...] in 2025, 46% of the surveyed companies have thrown out their AI proofs-of-concept and 42% have abandoned most of their AI initiatives — complete failure. The abandonment rate in 2024 was 17%." (Original title: AI in the enterprise is failing over twice as fast in 2025 as it was in 2024)
"The value proposition collapses when your infrastructure for thought becomes optimized for the attention economy. You can’t serve two masters. You either build a tool for writers or you build an app for dopamine hits. Once you choose the latter, you’ve already traded your audience." (Original title: This is Peak Featurecide) https://www.joanwestenberg.com/this-is-peak-featurecide/
The way "AI" systems are presented as "social companions to fight loneliness" is very similar in how "AI" is supposed to help teaching: Both are founded in a massive misunderstanding of the social dynamics techies try to replace. You friend isn't just a thing to send you text, they're not a service. Being taught also ins't a mere service, it's a transformative experience that changes you and your teacher.