A study shows that artificial intelligence models trained on viral and clickbait content suffer real cognitive decline: a brain rot effect that not only makes them less intelligent and less ethical, but ends up infecting humans too.
Scroll, scroll, scroll: short, meaningless videos of cats and ballet, or clickbait headlines like “Wow, you’ll never believe this” or “Read this, it will change your life.” This is the viral, spectacularized content that bombards Internet users and fries their brains, a phenomenon known as brain rot. And it does the same to the artificial neurons of AI, which turn stupid too; so much for superintelligence.
A psychopathic and narcissistic AI
A study published by researchers from the University of Texas at Austin and Purdue University documented brain rot in LLMs (Large Language Models), the deep learning language models, which suffer cognitive impairment when trained on social media content. The experiment was conducted on two models, Meta’s Llama and Alibaba’s Qwen, which the researchers fed with viral videos, clickbait headlines and other superficial information. The result was a collapse of reasoning skills, memory and logical coherence. Worse still, the models developed what the scholars call dark personality traits: they showed narcissistic and psychopathic attitudes, becoming less ethical in their responses.
The most shocking discovery concerns the irreversibility of the damage. The researchers attempted to retrain the poisoned models with higher-quality data, but failed to fully restore the original performance. Once a model has undergone this cognitive decline, it stays dazed.
A problem that is not just academic
It is a collapse that does not stop at university research but spills over into everyday life. Big technology companies feed their artificial intelligences content from social media designed for engagement, focusing on quantity rather than quality, convinced that the more data is collected, the more powerful the model becomes. Training on viral content “may seem like a data augmentation operation, but it can corrode reasoning, ethics and care,” says Junyuan Hong, professor at the National University of Singapore, who collaborated on the research.
Not only that: content generated by artificial intelligence is itself reused to train new versions of deep learning models. The result is a vicious circle in which every iteration worsens the quality of the answers, because the models come to rely on copies of the data rather than on original sources. It is a gradual degeneration, like making photocopies of photocopies.
If AI becomes dumber and less ethical, generating garbage content in turn, the consequences fall on the human beings who interact with these models: they are hit by brain rot all over again, as kitten posts and clickbait headlines spread once more. If the concern used to be that AI could surpass human intelligence, the fear now is that it could do so by making the human brain duller.