
LLMs can get "brain rot"

(llm-brain-rot.github.io)
466 points by tamnd | 8 comments
avazhi No.45658886
“Studying “Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI, guiding how we source, filter, and maintain training corpora so deployed systems stay sharp, reliable, and aligned over time.”

An LLM-written line if I’ve ever seen one. Looks like the authors have their own brainrot to contend with.

askafriend No.45658899
If it conveys the intended information then what's wrong with that? You're fighting a tsunami here. People are going to use LLMs to help their writing now and forever.
1. avazhi No.45658977
If you can’t understand the irony inherent in getting an LLM to write about LLM brain rot (itself an analogue of the human brain rot that arises from habitual non-use of the human brain), then I’m not sure what to tell you.

Whether it’s a tsunami, and whether most people will do it, has no bearing on my expectation that researchers of LLMs and brain rot shouldn’t outsource their own thinking and creativity to an LLM in a paper that itself implies that using LLMs causes brain rot.

3. nemonemo No.45659116
What you are obsessing over is the writer’s style, not the substance. How sure are you that they outsourced the thinking to LLMs? Do you assume LLMs produce only junk-level content, the kind that contributes to human brain rot? What if their output is of higher quality, like their play in the game of Go? Wouldn’t you rather study their writing?
4. avazhi No.45659326
Writing is thinking, so they necessarily outsourced their thinking to an LLM. The quality of the writing is a separate question, but we are nowhere close to LLMs being better, more creative, or more interesting writers than even merely decent human writers. And even if we were, it wouldn’t change the perversity of using an LLM here.
5. jazzyjackson No.45662876
Writing reflects a person's train of thought. I am interested in what people think. What a robot thinks is of no value to me.
6. afavour No.45663213
> What you are obsessing with is about the writer's style, not its substance

They aren’t; those are tired stylistic tics that suggest the writer did not write the sentence.

Writing is both a process and an output. It’s a way of processing your thoughts and forming an argument. When you skip all of that and get an AI to produce the output without the process, it’s obvious.

7. nemonemo No.45664166
Have you considered that English might not be the authors’ first language? They may have written a draft in their mother tongue and merely translated it using LLMs. The style may not be to many people’s liking, but this is a technical manuscript, and I would think the novelty of the ideas matters more here than the novelty of the prose.
8. jll29 No.45665063
I agree with the “writing is thinking” part, but I think most would agree LLM output is at least “eloquent”, and that even native speakers can benefit from reformulation.

This is _not_ to say that I'd suggest LLMs should be used to write papers.