"LLM did something bad, then I asked it to clarify/explain itself" is not critical analysis but an illustration of magical thinking. These systems generate tokens. That is all. They don't "know" or "understand" or "explain" anything. There is no cognitive system at work that could respond meaningfully. It's the same dumb shit that was found in Apple Intelligence's system prompt: "Do not hallucinate" does nothing. All the tokens you give the system as input just shift which part of the word space stored in the network gets activated. "Explain your work" just nudges the network toward training data that contains those kinds of phrases (like tests and their solutions). It points the system at a different region, but the system does not understand the command. It can't.
"AI in the enterprise is failing faster than last year [...] in 2025, 46% of the surveyed companies have thrown out their AI proofs-of-concept and 42% have abandoned most of their AI initiatives — complete failure. The abandonment rate in 2024 was 17%." (Original title: AI in the enterprise is failing over twice as fast in 2025 as it was in 2024)
"The value proposition collapses when your infrastructure for thought becomes optimized for the attention economy. You can’t serve two masters. You either build a tool for writers or you build an app for dopamine hits. Once you choose the latter, you’ve already traded your audience." (Original title: This is Peak Featurecide) https://www.joanwestenberg.com/this-is-peak-featurecide/
The way "AI" systems are presented as "social companions to fight loneliness" is very similar to how "AI" is supposed to help with teaching: Both are founded on a massive misunderstanding of the social dynamics techies are trying to replace. Your friend isn't just a thing that sends you text; they're not a service. Being taught isn't a mere service either; it's a transformative experience that changes you and your teacher.
This comic on the online design community applies 100% to the tech sector. (Original title: The water's fine?)
What value do tech columnists and podcasters and writers who just chase every hype bring to the table? Not for corporations but for you? Why are so many people still reading and listening to that kind of drivel?
None of this is particularly consistent: On the one hand, warfare today is supposed to be super complex, with AI and drones and whatnot. On the other hand, people now insist on conscription, with training so short that nobody will be qualified for these "new technologies". Could it be that this is simply militarism after all?
"Apple’s AI isn’t a letdown. AI is the letdown" Even CNN is slowly starting to get it.
I think this is the core issue of the "but humans also remix" debate on #AI: Humans take in _art_, machines take in _data_. Those two things are conceptually different, and everything follows from that distinction.
OpenAI's move to allow generating "Ghibli style" images isn't just a cute PR stunt. It is an expression of dominance and of the will to reject and refuse democratic values. It is a vulgar display of power.