
LLMs can get "brain rot"

(llm-brain-rot.github.io)
466 points by tamnd | 2 comments
Version467 No.45665703
So they trained LLMs on a bunch of junk and then noticed that the models got worse? I don't understand how that's a surprising, or even interesting, result.
replies(3): >>45665753 >>45666033 >>45667950
nazgul17 No.45665753
They also tried to heal the damage, with only partial success (a toy sketch of that degrade-then-heal loop follows below). Besides, it's science: you need to test your hypotheses empirically. And running a study and sharing the results is possibly the best way to draw researchers' attention to the issue.
replies(2): >>45665776 >>45666973
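A minimal sketch of that degrade-then-heal loop, not the paper's actual setup: a toy PyTorch classifier is pretrained on clean data, continually trained on label-noised "junk", then retrained on clean data. The helper names (make_data, train_on, accuracy) are illustrative assumptions, not anything from the study.

    import torch
    from torch import nn

    torch.manual_seed(0)

    def make_data(n, noise=0.0):
        # Two-class toy task: label = sign of the feature sum,
        # with a `noise` fraction of labels flipped to simulate junk.
        x = torch.randn(n, 16)
        y = (x.sum(dim=1) > 0).long()
        flip = torch.rand(n) < noise
        y[flip] = 1 - y[flip]
        return x, y

    def train_on(model, x, y, steps=300, lr=0.05):
        # Full-batch gradient descent; enough to shift the model
        # toward whatever the current data says.
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(steps):
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

    @torch.no_grad()
    def accuracy(model, x, y):
        return (model(x).argmax(dim=1) == y).float().mean().item()

    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
    x_test, y_test = make_data(4000)

    train_on(model, *make_data(4000))              # clean pretraining
    print("after clean pretraining:", accuracy(model, x_test, y_test))

    # noise=0.8 flips most labels, so this phase actively
    # teaches the wrong function ("brain rot").
    train_on(model, *make_data(4000, noise=0.8))
    print("after junk exposure:", accuracy(model, x_test, y_test))

    train_on(model, *make_data(4000))              # attempted healing
    print("after clean healing:", accuracy(model, x_test, y_test))

On this toy problem the healing phase usually recovers most of the lost accuracy; the thread's point about the paper is that for real LLMs the recovery was only partial.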
yieldcrv No.45665776
I don’t understand: so this is just about training an LLM with bad data and ending up with a bad LLM?

just use a different model?

don’t train it with bad data, and just start a new session if your RAG muffins went off the rails?

what am I missing here?

replies(2): >>45665849 >>45669028
1. ramon156 No.45665849
Do you know the concept of brain rot? The gist here is that if you train on bad data (if you fuel your brain with bad information), it becomes bad.
replies(1): >>45669490
2. yieldcrv No.45669490
I don’t understand why this is news, or relevant information, in October 2025 as opposed to October 2022.