"AI bots that scrape the internet for training data are hammering the servers of libraries, archives, museums, and galleries, and are in some cases knocking their collections offline" #AI is ruining our digital world (Original title: AI Scraping Bots Are Breaking Open Libraries, Archives, and Museums)
New study on the effects of LLM use (in this case on essay writing): Quote: "LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning." The interesting thing is: people who used search engines (to find sources, etc.) did not show similar issues. This is an important antidote to the belief that LLM-based tools are just like search engines. Which they are not. They are massively degrading their users' mental abilities and development. Which is why these systems have absolutely no place even _near_ any school or university.
In the end it seems to me that one of the main distinctions between people who see LLMs as good and those who don't is whether they see the digital part of the world as "content" or as "people". If it's all just content, LLMs make sense. If it's where people live, LLMs become a somewhat dumb idea.
If you are also annoyed by Firefox for some reason now trimming the protocol in the address bar, you can fix that by setting "browser.urlbar.trimURLs" to false in about:config and get back URLs you can properly copy and use.
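If you prefer to keep the setting in a file instead of clicking through about:config, the same preference can go into a user.js in your Firefox profile directory (a minimal sketch; the profile path varies per system and the file is applied on the next start):

```js
// user.js in your Firefox profile directory
// Keep the full URL (including the protocol) visible in the address bar
user_pref("browser.urlbar.trimURLs", false);
```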
Calculating water/energy usage for "AI" per token is a bit problematic: a data center has a massive base load just by existing, even if nobody uses it. And since we have no actual data for any of the popular platforms, all the numbers floating around are shaky and not very useful. Like how much power does one of those servers with its NVIDIA cards really save when its utilization is only 50%? And are the overhead costs actually counted?
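To make the allocation problem concrete, here is a toy back-of-the-envelope sketch in Python. Every number in it is invented; the only point is that the "per token" figure changes by roughly a factor of three depending on whether you count base load, idle time and data-center overhead or just the marginal draw while serving:

```python
# Toy illustration (all numbers are made up) of why "energy per token"
# depends almost entirely on what you allocate to the tokens.

IDLE_POWER_W = 400.0       # assumed idle draw of one GPU server
PEAK_POWER_W = 1200.0      # assumed draw under full load
UTILIZATION = 0.5          # fraction of time the server actually serves requests
TOKENS_PER_SECOND = 500.0  # assumed throughput while serving
OVERHEAD_FACTOR = 1.4      # assumed data-center overhead (cooling, networking, ...)

# Marginal view: only the extra power above idle, only while serving.
marginal_joules_per_token = (PEAK_POWER_W - IDLE_POWER_W) / TOKENS_PER_SECOND

# Fully allocated view: spread the whole always-on draw, including idle time
# and overhead, over the tokens that were actually produced.
average_power_w = UTILIZATION * PEAK_POWER_W + (1 - UTILIZATION) * IDLE_POWER_W
tokens_per_second_overall = UTILIZATION * TOKENS_PER_SECOND
allocated_joules_per_token = (average_power_w * OVERHEAD_FACTOR) / tokens_per_second_overall

print(f"marginal:  {marginal_joules_per_token:.2f} J/token")   # ~1.6 J/token
print(f"allocated: {allocated_joules_per_token:.2f} J/token")  # ~4.5 J/token
```

Same hardware, same workload, wildly different numbers; that spread is exactly why the per-token figures circulating without disclosed assumptions don't tell us much.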
This essay by @npub1hr70...jxyc on why individual experiments on the usefulness of "AI" (or similar stuff) don't teach us anything useful and might actually harm us is brilliant. Go read it. Too many insights to pull a quote TBH:
"The cult of goal-setting thrives in this illusion. It converts uncertainty into an illusion of progress. It demands specificity in exchange for comfort. And it replaces self-trust with the performance of future-planning." (Original title: Smart People Don't Chase Goals; They Create Limits) https://www.joanwestenberg.com/smart-people-dont-chase-goals-they-create-limits/
"The real threat posed by generative AI is not that it will eliminate work on a mass scale, rendering human labour obsolete. It is that, left unchecked, it will continue to transform work in ways that deepen precarity, intensify surveillance, and widen existing inequalities." "The current trajectory of generative AI reflects the priorities of firms seeking to lower costs, discipline workers, and consolidate profits — not any drive to enhance human flourishing. If we allow this trajectory to go unchallenged, we should not be surprised when the gains from technological innovation accrue to the few, while the burdens fall upon the many."
As already announced: I'm selling my old T14s Gen 3 Thinkpad. With friends and acquaintances I will of course still talk about the price separately; I just checked what the machine typically goes for elsewhere.
"The process of coding with an “agentic” LLM appears to be the process of carefully distilling all the worst parts of code review, and removing and discarding all of its benefits." Very insightful post on #GenAI by Glyph (Original title: I Think I’m Done Thinking About genAI For Now)