I think the r/vibecoding subreddit has accidentally stumbled on the best description of vibecoding: it's "roleplay for guys [it is always guys] who want to feel like hackers without doing the hard part". (Source: https://www.reddit.com/r/vibecoding/comments/1mu6t8z/whats_the_point_of_vibe_coding_if_i_still_have_to/ ) #ai #vibecoding
David Gerard on the "AI doomers"/"rationalists" and their beliefs. In the end they are a bunch of eugenicist, racist losers afraid of death. (Original title: AI doomsday and AI heaven: live forever in AI God)
Maybe the last few days have made more people realize that GitHub is not your friend and not there to facilitate Open Source but to accumulate power for Microsoft. Anything GitHub says or does needs to be understood in that light.
There used to be a deal between Google (and other search engines) and the web: you get to index our content and show ads next to it, but you link back to our work. AI Overviews, Perplexity and all these systems cancel that deal. And maybe - for a while - search will also need to die a bit? Make the whole web uncrawlable. Refuse any bots. As an act of resistance to the tech sector as a whole.
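Refusing bots can start with a robots.txt that names the AI crawlers by their documented user-agent tokens - a minimal sketch, assuming the tokens the major operators currently publish (GPTBot, Google-Extended, CCBot, PerplexityBot). Keep in mind that robots.txt compliance is entirely voluntary; crawlers that ignore it have to be blocked at the server level instead.

```
# robots.txt - deny the AI crawlers that honor it
User-agent: GPTBot           # OpenAI's training crawler
Disallow: /

User-agent: Google-Extended  # Google's AI-training opt-out token
Disallow: /

User-agent: CCBot            # Common Crawl, a common training-data source
Disallow: /

User-agent: PerplexityBot    # Perplexity's crawler
Disallow: /
```

This only covers the bots that choose to identify themselves honestly; anything beyond that means user-agent or IP-range filtering in the web server config.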
One thing that the current exorbitant investments in "AI" show is that the investor class and big tech corporations do not pay enough taxes: if you have billions to set on fire for spicy autocomplete, we should take some or all of that money to do something useful with.
LLMs/"AI" are conceptually not software but data.
I wrote about frictionlessness and "AI". The essay is admittedly a bit of a weird ride trying to connect a few very distinct thoughts. I hope it's still worth reading.
I got myself a ticket to see @Mer__edith in Berlin soon. Get them while they last.
I was talking to someone yesterday (let's call them A) and they had an "AI" experience I thought might happen but hadn't heard of before. They were interacting with an organization and, upon asking a specific question, got a very specific answer. Weeks later that organization claimed it had never said any such thing, and when A showed the email as proof, the defense was: oh yeah, we're an international organization and things are busy right now, so the person who sent the original mail probably had an LLM write it that made shit up. It literally ended with: "Let's just blame the robot ;)". (Edit: I did read the email and it did not read like something an LLM wrote. I think we are seeing "the LLM did it" emerge as a way to cover up mistakes.) LLMs as diffusors of responsibility in corporate environments was quite obviously going to be a key sales pitch, but it was new to me that people would use those lines in direct communication.
Microsoft Research is at it again: Advait Sarkar, a Microsoft Research employee, got a paper published at the CHI Conference on Human Factors in Computing Systems: "AI Could Have Written This: Birth of a Classist Slur in Knowledge Work" https://dl.acm.org/doi/10.1145/3706599.3716239 Now I am all for calling out structures and language of oppression and discrimination, but this is really something special. "AI shaming arises from class anxiety in middle class knowledge workers" ... yeah, no. It's about shaming people (usually from the middle or upper class) who don't want to put in actual work but still get credit for having done it. And what is the argument here: that lower class people can only compete while using AI and therefore should not be shamed? What kind of a view of lower class people does Mr. Microsoft Research communicate here?