Google's emissions are up over 50%, and Amazon builds huge data centers powered by 75% natural gas. Remember all those posts telling us that "AI's climate impact isn't that bad", supported by some really funky math/perspective and/or numbers Sam Altman invented? Here's the actual impact. "AI" is a fossil fuel technology.
Read through @npub1v0sp...tw6h's "signals" proposal and that's ... really weak. Feels like it's just a bit of window dressing to keep the community busy while AI companies take everything they can find. Like, why is that kind of signalling not part of the licenses? The promise of CC licenses was reuse by others (that is, _people_), not machines. My stuff is CC because I want other human beings to potentially use it, but if I change the license, it is about excluding AI companies from it. I don't want to "signal" (meaning beg), I want to forbid (meaning add it to the license). TBH: The whole AI shit and the way that OSI and CC have reacted to it have really shown just how poorly thought out a lot of the core infrastructure of the digital commons is.
"I do not need the one magic machine that claims to solve all my issues and then makes me jump through conversational hoops to get a mediocre result. That is actually the opposite of what I need." On chatbots as a bad design paradigm (Original title: β€œChatBot” is bad design)
"You can tell what happened β€” Google promised iNaturalist free money if they would just do something, anything, that had some generative AI in it. iNaturalist forgot why people contribute at all, and took the cash." (Original title: Google bribes iNaturalist to use generative AI β€” volunteers quit in outrage)
I have this idea of building a "Luddite Library": a set of information, tools, and processes to harness luddite thinking when analyzing technological developments and "innovation". Something that interested parties could use to understand that there might be a different way to think about what tech is/should be/can be/mustn't be for us. Think, for example, of sets of questions to use to analyze a new thing being pushed on you, and similar tools. I'm thinking about applying for grants to fund this. If anyone has an idea where to propose this, I'd be grateful for a message to: tante+ludditelibrary@tante.cc
Teaching people how to use LLMs is not "upskilling", it's the opposite.
"AI bots that scrape the internet for training data are hammering the servers of libraries, archives, museums, and galleries, and are in some cases knocking their collections offline" #AI is ruining our digital world (Original title: AI Scraping Bots Are Breaking Open Libraries, Archives, and Museums)
New study on the effects of LLM use (in this case on essay writing): Quote: "LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning." The interesting thing is: people who used search engines (to find sources etc.) did not show similar issues. This is an important antidote to the belief that LLM-based tools are just like search engines. They are not. They are massively degrading their users' mental abilities and development. Which is why these systems have absolutely no place even _near_ any school or university.
In the end it seems to me that one of the main distinctions between people who see LLMs as good and those who don't is whether they see the digital part of the world as "content" or as "people". If it's all just content, LLMs make sense. If it's where people live, LLMs become a somewhat dumb idea.
If you are also annoyed by Firefox, for some reason, now trimming the protocol in the address bar, you can fix that behavior by setting "browser.urlbar.trimURLs" to false in about:config, which gets you back URLs you can properly copy and use.
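If you would rather persist that setting in a user.js file in your Firefox profile directory instead of flipping it in about:config (a minimal sketch, assuming you use user.js at all), the equivalent line is:

user_pref("browser.urlbar.trimURLs", false); // show the full URL, including the protocol, in the address bar

Firefox reads user.js on startup and applies the preference to the profile, so the change survives updates and profile resets of individual prefs.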