Classic
There is only one god, and his name is Death. And there is only one thing we say to Death: 'Not today.'
GM Go Move
AI models develop 'brain rot' from ingesting too much viral social media content, study finds

Think doomscrolling is bad for your brain? Turns out, AI suffers too. A new study from the University of Texas and others found that large language models can get a sort of "brain rot" when fed low-quality web content. Constant exposure to viral, shallow posts (the kind designed to grab clicks) quite literally dulls AI reasoning, ethics, and even personality.

The numbers tell the story. AI models trained on junk content saw reasoning scores drop from 74.9% to 57.2%. Long-context understanding and ethical norms also took a hit. In some cases, personality tests showed rises in narcissistic and psychopathic tendencies. The very data meant to boost AI performance was actually corrupting it.

The root cause is clear. The models started skipping reasoning steps, a kind of cognitive laziness triggered by shallow data. Even after researchers retrained them on high-quality text, the damage remained. Viral posts caused more harm than low-engagement, nuanced content: the same content that can rot human attention also rots machine reasoning.

The bottom line. The authors of the study say this isn't just about data quality but a training-time safety problem. As LLMs keep ingesting the open web, curating their "information diets" becomes as important as alignment tuning. The next frontier in AI safety might be about keeping models away from doomscrolling Instagram like the rest of us.
Looking for a sign? Here's your sign. GM Go Move
"The possession of anything begins in the mind." – Bruce Lee