
If you think scrolling the internet all day is making you dumber, just imagine what it’s doing to large language models that consume a near-endless stream of absolute trash crawled from the web in the name of “training.” A research team recently proposed and tested the “LLM Brain Rot Hypothesis,” which posits that the more junk data an AI model is fed, the worse its outputs become. Turns out that is a pretty solid theory, as a preprint paper published to arXiv…
