Got asked to review a book proposal for "A Guide to Prompt Engineering".
More accurate title: "A Guide to Poking at the Environmentally Disastrous Racist Pile of Linear Algebra Trained on Stolen Data and Exploitative Labor Practices to Produce Outputs You're Too Lazy to Learn to Evaluate"
"Part of our task in the face of generative AI is to make an argument for the value of thinking – laboured, painful, frustrating thinking."
"[W]e also need to hold our institutions accountable. [...] university administrators are highly susceptible to the temptations of technology-driven downsizing, big tech donations, and the appearance of being on the cutting edge."
There's so much to dunk on in this NYT piece and so little time, but I gotta start here: Some law profs at U Chicago did a study to see if the chatbots could answer questions based on specific materials and found, unsurprisingly, that they make shit up.
I appreciate this piece, but I want to correct the record on one point. I don't talk about LLMs as making "collages" but rather as making papier-mâché, and the difference matters!
I want to add another layer of nuance here. Ethan Zuckerman writes "But the world we live in now, in which we struggle to understand and constrain our machines, is anything but normal."
I don't think the LLMs are actually mysterious at all.