A new study from the University of Texas at Austin, Texas A&M University, and Purdue University suggests that artificial intelligence can suffer "brain rot" much as humans do when fed a steady diet of poor-quality social media content.
The phenomenon, dubbed "AI brain rot," describes how large language models that absorb too much viral, sensational, and shallow content gradually lose their ability to reason and remember, and can even drift toward ethically questionable behavior.

Not only humans: AI also shows cognitive decline when it consumes too many junk short videos.
The research team, led by Junyuan Hong, an incoming lecturer at the National University of Singapore, ran experiments on two open-source language models: Meta's Llama and Alibaba's Qwen.
They fed the models different types of data: some neutral, informational content, and some highly addictive social media posts laced with attention-grabbing words like "wow," "look," and "today only." The goal was to see what happens when AI is trained on content designed to attract views rather than deliver real value.
The results showed that models fed a stream of online junk began to show clear signs of cognitive decline: their reasoning weakened, their short-term memory deteriorated, and, more worryingly, they scored as more "unethical" on behavioral assessments.
Some measurements also revealed a kind of "psychological distortion" resembling the response people show after prolonged exposure to harmful content. The finding echoes earlier studies of humans suggesting that "doomscrolling," the compulsive scrolling of negative news online, can gradually erode the brain.
The phrase "brain rot" was even named Oxford's word of the year for 2024, reflecting how pervasive the phenomenon has become in digital life.

The finding is a stark warning for an AI industry in which many companies still treat social media data as a rich training resource, Hong said.
“Training with viral content may help scale up the data, but it also silently erodes the model’s reasoning, ethics, and attention,” he said. More worryingly, models affected by this type of poor-quality data cannot fully recover even after retraining with “cleaner” data.
The problem is compounded by the fact that AI itself now produces an ever-growing share of the content on social networks. As AI-generated posts, images, and comments proliferate, they feed back into the training data for the next generation of models, creating a vicious cycle of declining data quality.
“As AI-generated junk content spreads, it pollutes the very data that future models will learn from,” Hong warned. “Once this ‘brain rot’ sets in, retraining with clean data cannot completely cure it.”
The study sounds a wake-up call for AI developers: in the global rush to scale up data, we may be nurturing "artificial brains" that are slowly rotting, not from a lack of information but from an overabundance of meaningless content.
Source: https://khoahocdoisong.vn/den-ai-cung-bi-ung-nao-neu-luot-tiktok-qua-nhieu-post2149064017.html