Educators spend a lot of time focusing on how to restructure assessments because of ChatGPT, but less time asking why education as a whole is vulnerable to AI in the first place, and what that vulnerability should prompt us to rethink in our systems of teaching and learning.
Reliable AI notetaking about "potentially life-changing choices" is sketchy enough, but a social work AI system that "proposes actions that human workers might not have considered" is a profoundly bad idea ☠️ employ more social workers ffs
I'm at the Centre for Commons Organising tomorrow, asking: Will AI fix public services? If I'm an educator but I don't embrace AI, am I a Luddite? Did Francis Galton invent deep learning? Will AI save us from climate change? Does decomputing mean I have to get rid of my laptop?
Replace all data centres with common land #PutDownAllMachineryHurtfulToTheCommonality
Any educator who embraces AI on the basis that students will need to engage with AI in the world of work has missed the point: AI is a fundamentally anti-worker technology which will immiserate both students' prospects of getting work and their experience in the workplace itself.
AI* does more for the far right than boost disinformation; it sediments it structurally and institutionally through:
- algorithmic states of exception
- eugenic solutionism
- bureaucratic cruelty
- diverting from structural solutions
[*all current AI, not just the generative kind]
Q. What do AI and the far right have in common?* A. They both want you to think they're inevitable (but they're not). [*Unfortunately this is only one of many resonances between AI and fascistic solutionism - see 'Resisting AI' and other sources for more on that]
Fascinating to see investor unease about gen-AI. It affirms much of our collective critique, but from a toxic standpoint. Also, even if gen-AI flames out, it still leaves the less visible and IMO more violent predictive AI to wreak damage in welfare, healthcare, etc.