AI models develop "brain rot" from ingesting too much viral social media content, study finds
Think doomscrolling is bad for your brain? Turns out, AI suffers too. A new study from the University of Texas and others found that large language models can get a sort of "brain rot" when fed low-quality web content. Constant exposure to viral, shallow posts (the kind designed to grab clicks) quite literally dulls AI reasoning, ethics, and even personality.
The numbers tell the story. AI models trained on junk content saw reasoning scores drop from 74.9% to 57.2%. Long-context understanding and ethical norms also took a hit. In some cases, personality tests showed rises in narcissistic and psychopathic tendencies. The very data meant to boost AI performance was actually corrupting it.
The root cause is clear. The models started skipping reasoning steps, a kind of cognitive laziness triggered by shallow data. Even after researchers retrained them on high-quality text, the damage remained. Viral posts caused more harm than low-engagement, nuanced content: the same material that can rot human attention also rots machine reasoning.
The bottom line. The authors of the study say this isn't just about data quality but a training-time safety problem. As LLMs keep ingesting the open web, curating their "information diets" becomes as important as alignment tuning. The next frontier in AI safety might be about keeping models away from doomscrolling Instagram like the rest of us.