I see that Fernando Pereira is donating to OpenReview (with a public announcement) because he apparently cares about "open science" or "open research" or some such. That's news to me.
He is the Google VP who wrote a condescending, bizarre letter, meant to be anonymous and marked "privileged and confidential," which was then sent to the HR department, instructing me and my coauthors to retract our Stochastic Parrots paper.
Wildly different things, tasks, techniques, and subspecialties being lumped into "AI" and then conflated with each other doesn't help. Different types of models vs. the techniques to train them vs. the tasks they are supposed to accomplish, all bucketed under "AI", is misleading. This is why @Prof. Emily M. Bender(she/her) and @Alex Hanna say to name the specific thing being discussed rather than calling it "AI". 🧵
Sherrilyn Ifill, president of the NAACP Legal Defense Fund, writes that my collaborator and I warned about these eugenicists a few years ago and were called all sorts of names. As she writes, we were "treated as extremists and conspiracy theorists." Even so-called "critics" of large language models came for me unsummoned as I spoke up about this. Marcus and LeCun were on the same side on this one, the side of attacking me for calling out these eugenicists.


Is It Too Late?
No. But We Must Better Understand the Nature of the Battle
When this "AI" bubble pops, the men pretending they weren't pushing the hype, like "critics" whose position is "AGI is real but LLMs aren't the way," who were in eugenicist and "AI existential risk" circles, will get specials discussing what they saw coming, when it's the women who actually told you so.
And then it will be rinse and repeat, onto the next grift.
Friends, given what is happening in the US, take a bit of time to read this wonderful article about Ariel Koren and
Respond Crisis Translation where @Alex Hanna and I are on the board. Ariel was pushed out of Google for uncovering Project Nimbus, Google's collaboration with the Israeli military on cloud services and "AI."
RCT is on the frontlines of protecting immigrants being hunted down around the country. Please support RCT if you have the money or tell others in your .


San Francisco Examiner
SF nonprofit translates for tasks big and small
Translation group finds interpreters for those in need.
I am currently having the displeasure of watching Peter Thiel's keynote at the 2013 "Effective Altruism (EA) Summit".
Yes, I too didn't understand what Thiel and "altruism" have to do with each other, but it made sense once I learned that EAs are super effective at the opposite of "altruism".
I can't believe it is already our FOURTH anniversary. Happy anniversary to us!
We have a small but mighty team. Each of these projects could be an institute of its own. I've been working at DAIR twice as long as the two long years I was at Google, and it flew by, because that's what happens when you love your work and are surrounded by great people 🥰
Check out the projects we're working on at www.dair-institute.org/projects/


There is a movement to "prove that the datacenter water issue is fake". If you venture into Muskrat's hell site, you can see the community note these people put on this great piece of investigative journalism, accusing it of unfairly implicating datacenters.
How Oregon's Data Center Boom Is Supercharging a Water Crisis
Amazon data centers constructed in eastern Oregon's farmland have worsened a water pollution problem that's been linked to cancer and miscar...
My 2 cents on so-called Artificial General Intelligence (AGI).
AGI is a fictional, undefined machine god, which is causing harm and runs counter to basic engineering principles.
It's "super intelligence" that's near and "replacing professionals" in the PR rounds, 60 Minutes and such; then, once everyone is convinced and uses these systems for legal and medical advice, following the deceptive marketing, they slip into the terms of service that you shouldn't do that.
By @npub1v9aa...5z6v and @Prof. Emily M. Bender(she/her)


OpenAI Tries to Shift Responsibility to Users
OpenAI is trying to shift the blame for bad legal and medical advice from its chatbot away from the company and onto users. We agree that no chatbo...