
LLMs can get "brain rot"

(llm-brain-rot.github.io)
466 points by tamnd | 3 comments
avazhi ◴[] No.45658886[source]
“Studying “Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI, guiding how we source, filter, and maintain training corpora so deployed systems stay sharp, reliable, and aligned over time.”

An LLM-written line if I’ve ever seen one. Looks like the authors have their own brainrot to contend with.

replies(12): >>45658899 #>>45660532 #>>45661492 #>>45662138 #>>45662241 #>>45664417 #>>45664474 #>>45665028 #>>45668042 #>>45670485 #>>45670910 #>>45671621 #
1. Nio1024 ◴[] No.45664417[source]
I think heavy use of large language models really accelerates mental atrophy. It's like relying on an input method that autocompletes words for years: one day you pick up a pen to write and find you can't remember how to spell them. That said, the article's main point is that we need to feed high-quality data to large language models, and isn't that already a consensus? Many agent startups are working hard to feed high-quality domain-specific knowledge and workflows into large models.
replies(2): >>45664460 #>>45668289 #
2. malfist ◴[] No.45664460[source]
Also, if you've built the perfect filter for context, haven't you just built a real AI?
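
(For concreteness: a crude heuristic quality filter for training text, far short of "a real AI", might look like the sketch below. The features, thresholds, and scoring are illustrative assumptions, not anything from the paper.)

```python
# Illustrative sketch only: a cheap heuristic "quality filter" of the kind
# data-curation pipelines run before pretraining. Features and thresholds
# here are hypothetical, not taken from the paper.

def quality_score(doc: str) -> float:
    """Score a document in [0, 1] using cheap lexical heuristics."""
    words = doc.split()
    if len(words) < 20:                    # too short to judge reliably
        return 0.0
    unique_ratio = len(set(words)) / len(words)           # penalize repetition
    alpha_ratio = sum(c.isalpha() or c.isspace() for c in doc) / len(doc)
    avg_word_len = sum(len(w) for w in words) / len(words)

    score = 0.0
    score += min(unique_ratio / 0.5, 1.0)                 # lexical diversity
    score += min(alpha_ratio / 0.8, 1.0)                  # not mostly symbols/markup
    score += 1.0 if 3.0 <= avg_word_len <= 10.0 else 0.0  # plausible word lengths
    return score / 3.0


def filter_corpus(docs: list[str], threshold: float = 0.75) -> list[str]:
    """Keep only documents whose heuristic score clears the threshold."""
    return [d for d in docs if quality_score(d) >= threshold]


if __name__ == "__main__":
    corpus = [
        "A plain paragraph explaining how attention weights are computed over tokens "
        "and why longer contexts cost more memory during training and inference.",
        "lol lol lol lol lol lol lol lol lol lol lol lol lol lol lol lol lol lol lol lol",
    ]
    print([round(quality_score(d), 2) for d in corpus])
    print(len(filter_corpus(corpus)), "of", len(corpus), "documents kept")
```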
3. conartist6 ◴[] No.45668289[source]
And if they need to keep their own output out of the system to avoid model collapse, why don't I?

There's this double standard. Slop is bad for models. Keep it out of the models at all costs! They cannot wait to put it into my head, though. They don't care about my head.