
LLMs can get "brain rot"

(llm-brain-rot.github.io)
466 points by tamnd | 1 comment | source
Version467 ◴[] No.45665703[source]
So they trained LLMs on a bunch of junk and then noticed that they got worse? I don't understand how that's a surprising, or even interesting, result.
replies(3): >>45665753 #>>45666033 #>>45667950 #
nazgul17 ◴[] No.45665753[source]
They also tried to heal the damage, with only partial success. Besides, it's science: you need to test your hypotheses empirically. And if you want to draw attention to the issue among researchers, performing a study and sharing your results is probably the best way to do it.
replies(2): >>45665776 #>>45666973 #
1. Version467 ◴[] No.45666973[source]
Yeah, I mean, I get that, but surely we already have research like this. "Garbage in, garbage out" is basically the catchphrase of the entire ML field. I guess the contribution here is showing that "brainrot"-like text is garbage, which, even though it seems obvious, does warrant scientific investigation. But then that's what the paper should focus on, not that "LLMs can get 'brain rot'".

I guess I don't actually have an issue with this research paper existing, but I do have an issue with its clickbaity title, which gets it a bunch of attention even though the actual research is really not that interesting.