Calculating water/energy usage per token for "AI" is a bit problematic: a data center has a massive base load simply by existing, even if nobody uses it. And since we have no actual data for any of the popular platforms, all the numbers floating around are questionable and not very useful. Like, how much power does one of those servers with NVIDIA cards really save if its utilization is only 50%? And are the overhead costs actually counted?
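To make the base-load point concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is invented purely for illustration, since, as said above, no actual platform data exists; the only point is that the assumed utilization alone swings a "per token" figure by an order of magnitude.

```python
# Illustrative sketch with made-up numbers (10 kW per server, 10,000 tokens/s at
# full load). It deliberately treats the server as drawing full power regardless
# of load, which is exactly the base-load problem described above.

def energy_per_token_mwh(server_power_w: float,
                         utilization: float,
                         tokens_per_second_at_full_load: float) -> float:
    """Milliwatt-hours per token if the whole server draw is attributed to tokens."""
    tokens_per_hour = tokens_per_second_at_full_load * utilization * 3600
    return server_power_w / tokens_per_hour * 1000  # Wh/token -> mWh/token

for utilization in (1.0, 0.5, 0.1):
    mwh = energy_per_token_mwh(10_000, utilization, 10_000)
    print(f"utilization {utilization:.0%}: {mwh:.2f} mWh per token")
```

With these hypothetical inputs the figure goes from roughly 0.28 mWh per token at full utilization to about 2.8 mWh at 10%, before any overhead is even counted.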
This essay by @Baldur Bjarnason on why individual experiments on the usefulness of "AI" (or similar stuff) don't teach us anything useful and might actually harm us is brilliant. Go read it. Too many insights to pull a quote TBH:
"The cult of goal-setting thrives in this illusion. It converts uncertainty into an illusion of progress. It demands specificity in exchange for comfort. And it replaces self-trust with the performance of future-planning." (Original title: Smart People Don't Chase Goals; They Create Limits) https://www.joanwestenberg.com/smart-people-dont-chase-goals-they-create-limits/
"The real threat posed by generative AI is not that it will eliminate work on a mass scale, rendering human labour obsolete. It is that, left unchecked, it will continue to transform work in ways that deepen precarity, intensify surveillance, and widen existing inequalities." "The current trajectory of generative AI reflects the priorities of firms seeking to lower costs, discipline workers, and consolidate profits — not any drive to enhance human flourishing. If we allow this trajectory to go unchallenged, we should not be surprised when the gains from technological innovation accrue to the few, while the burdens fall upon the many."
As already announced: I'm selling my old T14s Gen 3 ThinkPad. With acquaintances and friends I will of course also talk about the price separately; I just looked around to see what the machine usually goes for elsewhere.
"The process of coding with an “agentic” LLM appears to be the process of carefully distilling all the worst parts of code review, and removing and discarding all of its benefits." Very insightful post on #GenAI by Glyph (Original title: I Think I’m Done Thinking About genAI For Now)
"So let’s get one thing out of the way: I think “AI literacy” is a dangerous device of neoliberal education and it deserves to be dismissed out of hand." "Using AI is not about communicating. It’s about avoiding communicating. It’s about not reading, not writing, not drawing, not seeing. It’s about ceding our powers of expression and comprehension to digital apps that will cushion us from fully participating in our own lives."
With so much hype and so many recent articles on "AI for coding" and how everyone not doing it is dumb, maybe this is a good time to relink my article on "Vibe Coding". Vibe coding, I think, focuses purely on "output", when developing or creating something is not just about the output.
After the whole "I asked ChatGPT" talk opener, I've recently seen a lot of "look, my kids are using AI to build their own games and that's beautiful" stuff in presentations. It makes me sad that instead of wanting kids to learn how to build something, they get taught to accept whatever the kinda-passable code generator craps out. What they learn is not how to conceptualize or build something; what they learn is that shit comes from nowhere if you just match your expectations to the output afterwards.
"As with many digital [vibe coding] doesn't fully stand up to scrutiny but show a deep misunderstanding of how software is made, the potential externalities (and internalities) software brings and a disdain for experience and embodied knowledge." (Original title: On “Vibe Coding”)