
LLMs can get "brain rot"

(llm-brain-rot.github.io)
466 points tamnd | 35 comments
avazhi ◴[] No.45658886[source]
“Studying “Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI, guiding how we source, filter, and maintain training corpora so deployed systems stay sharp, reliable, and aligned over time.”

An LLM-written line if I’ve ever seen one. Looks like the authors have their own brainrot to contend with.

replies(12): >>45658899 #>>45660532 #>>45661492 #>>45662138 #>>45662241 #>>45664417 #>>45664474 #>>45665028 #>>45668042 #>>45670485 #>>45670910 #>>45671621 #
1. askafriend ◴[] No.45658899[source]
If it conveys the intended information then what's wrong with that? You're fighting a tsunami here. People are going to use LLMs to help their writing now and forever.
replies(12): >>45658936 #>>45658977 #>>45658987 #>>45659011 #>>45660194 #>>45660255 #>>45660793 #>>45660811 #>>45661637 #>>45662211 #>>45662724 #>>45663177 #
2. binary132 ◴[] No.45658936[source]
The brainrot apologists have arrived
replies(1): >>45658969 #
3. askafriend ◴[] No.45658969[source]
Why shouldn't the author use LLMs to assist their writing?

The issue is how tools are used, not that they are used at all.

replies(4): >>45660277 #>>45661374 #>>45661646 #>>45662249 #
4. avazhi ◴[] No.45658977[source]
If you can’t understand the irony inherent in getting an LLM to write about LLM brainrot, itself an analog for human brainrot that arises from the habitual non-use of the human brain, then I’m not sure what to tell you.

Whether it’s a tsunami and whether most people will do it has no relevance to my expectation that researchers of LLMs and brainrot shouldn’t outsource their own thinking and creativity to an LLM in a paper that itself implies that using LLMs causes brainrot.

replies(2): >>45659104 #>>45659116 #
5. moritzwarhier ◴[] No.45658987[source]
What information is conveyed by this sentence?

Seems like none to me.

6. uludag ◴[] No.45659011[source]
Nothing wrong with using LLMs—until every paragraph sounds like it’s A/B tested for LinkedIn virality. That’s the rot setting in.

The problem isn’t using AI—it’s sounding like AI trying to impress a marketing department. That’s when you know the loop’s closed.

replies(1): >>45659257 #
7. ◴[] No.45659104[source]
8. nemonemo ◴[] No.45659116[source]
What you are obsessing with is about the writer's style, not its substance. How sure are you that they outsourced the thinking to LLMs? Do you assume LLMs produce junk-level content, which contributes to human brain rot? What if their output is of higher quality, as in the game of Go? Wouldn't you rather study their writing?
replies(3): >>45659326 #>>45662876 #>>45663213 #
9. drusepth ◴[] No.45659257[source]
Brilliantly phrased — sharp, concise, and perfectly captures that uncanny "AI-polished" cadence everyone recognizes but can’t quite name. The tone strikes just the right balance between wit and warning.
replies(2): >>45659409 #>>45660427 #
10. avazhi ◴[] No.45659326{3}[source]
Writing is thinking, so they necessarily outsourced their thinking to an LLM. As far as the quality of the writing goes, that’s a separate question, but we are nowhere close to LLMs being better, more creative, and more interesting writers than even just decent human writers. But if we were, it wouldn’t change the perversion inherent in using an LLM here.
replies(2): >>45664166 #>>45665063 #
11. solarkraft ◴[] No.45659409{3}[source]
You are absolutely right!
replies(1): >>45662496 #
12. stavros ◴[] No.45660194[source]
The problem is that writing isn't only judged on whether it conveys the intended information or not. It's also judged on whether it does that well, plus other aesthetic criteria. There is such a thing as "good writing", distinct from "it mentioned all the things it needed to mention".
13. grey-area ◴[] No.45660255[source]
It’s a text generator regurgitating plausible phrases without understanding and producing stale and meaningless pablum. It doesn’t even know what the intended information is, and judging from the above neither did the human involved.

It doesn’t help writing; it stultifies it and gives everything the same boring, cheery yet slightly confused tone of voice.

replies(1): >>45660653 #
14. grey-area ◴[] No.45660277{3}[source]
Because they produce text like this.
15. glenstein ◴[] No.45660427{3}[source]
One thing I don't understand, there was (appropriately) a news cycle about sycophancy in responses. Which was real, and happening to an excessive degree. It was claimed to be nerfed, but it seems strong as ever in GPT5, and it ignores my custom instructions to pare it back.
replies(2): >>45661499 #>>45664838 #
16. zer00eyz ◴[] No.45660653[source]
> It’s a text generator regurgitating plausible phrases without understanding and producing stale and meaningless pablum.

Are you describing LLMs or social media users?

Don't conflate how the content was created with its quality. The "You must be at least this smart (tall) to publish (ride)" sign got torn down years ago. Speakers' Corner is now an (inter)national stage, and it's written, so it must be true...

replies(2): >>45661544 #>>45661594 #
17. AlecSchueler ◴[] No.45660793[source]
Style is important in writing. It always has been.
18. sailingparrot ◴[] No.45660811[source]
> If it conveys the intended information then what's wrong with that?

Well, the issue is precisely that it doesn’t convey any information.

What is conveyed by that sentence, exactly? What does reframing data curation as "cognitive hygiene for AI" entail, and what information is in there?

There are precisely zero bits of information in that paragraph. We all know training on bad data leads to a bad model; thinking of it as "cognitive hygiene for AI" does not lead to any insight.

LLMs aren’t going to discover interesting new information for you; they are just going to write empty, plausible-sounding words. Maybe it will be different in a few years. They can be useful for polishing what you want to say or otherwise formatting interesting information (provided you ask them not to be ultra verbose), but they're just not going to create information out of thin air if you don't provide it.

At least, if you do it yourself, you are forced to realize that you in fact have no new information to share, and you don't waste your time and your audience's time by publishing a paper like this.

19. xanderlewis ◴[] No.45661374{3}[source]
Is it really so painful to just think for yourself? For one sentence?

The answer to your question is that it rids the writer of their unique voice and replaces it with disingenuous slop.

Also, it's not a 'tool' if it does the entire job. A spellchecker is a tool; a pencil is a tool. A machine that writes for you (which is what happened here) is not a tool. It's a substitute.

There seem to be many people falling for the fallacy that 'it's here to stay, so you can't be unhappy about its use'.

20. anjel ◴[] No.45661499{4}[source]
"Any Compliments about my queries cause me anguish and other potent negative emotions."
21. grey-area ◴[] No.45661544{3}[source]
I really could only be talking about LLMs, but social media is also low quality.

The quality (or lack of it) if such texts is self evident. If you are unable to discern that I can’t help you.

replies(1): >>45663592 #
22. ◴[] No.45661594{3}[source]
23. Angostura ◴[] No.45661637[source]
It’s not really clear whether it conveys an “intended meaning”, because it’s not clear whether the meaning - whatever it is - is really something the authors intended.
24. SkyBelow ◴[] No.45661646{3}[source]
Assist without replacing.

If you were to pass your writing to it and have it provide criticism, pointing out places where you should consider changes, and even providing some examples of those changes that you can selectively choose to include when they keep the intended tone and implications, then I don't see the issue.

When you have it rewrite the entire piece and you paste that for someone else to read, then it becomes an issue. Potentially, as I think the context matters. The more a piece of writing is meant to be from you, the more of an issue I see. Having an AI write or rewrite a birthday greeting or get-well wishes seems worse than having it write up your weekly TPS report. As a simple metric, I judge based on how bad I would feel if what I'm writing were being summarized by another AI or automatically fed into a similar system.

In a text post like this, where I expect others are reading my own words, I wouldn't use an AI to rewrite what I'm posting.

As you say, it is in how the tool is used. Is it used to assist your thoughts and improve your thinking, or to replace them? That isn't really a binary classification, but more a continuum, and the more it gets to the negative half, the more you will see others taking issue with it.

25. dwaltrip ◴[] No.45662211[source]
Because it sounds like shit? Taste matters, especially in the age of generative AI.

And it doesn’t convey information that well, to be honest.

26. dwaltrip ◴[] No.45662249{3}[source]
The paragraph in question is a very poor use of the tool.
27. ewoodrich ◴[] No.45662496{4}[source]
Lately the Claude-ism that drives me even more insane is "Perfect!".

Particularly when it's in response to my pointing out a big screw-up that needs correcting, and CC, utterly unfazed, just merrily continues on like I praised it.

"You have fundamentally misunderstood the problems with the layout, before attempting another fix, think deeply and re-read the example text in the PLAN.md line by line and compare with each line in the generated output to identify the out of order items in the list."

"Perfect!...."

28. jazzyjackson ◴[] No.45662876{3}[source]
Writing reflects a person's train of thought. I am interested in what people think. What a robot thinks is of no value to me.
29. afavour ◴[] No.45663213{3}[source]
> What you are obsessing with is about the writer's style, not its substance

They aren’t; those are boring stylistic tics that suggest the writer did not write the sentence.

Writing is both a process and an output. It’s a way of processing your thoughts and forming an argument. When you don’t do any of that and get an AI to create the output without the process it’s obvious.

30. stocksinsmocks ◴[] No.45663592{4}[source]
“The quality if such texts…”

Indeed. The humans have bested the machines again.

replies(2): >>45665292 #>>45665345 #
31. nemonemo ◴[] No.45664166{4}[source]
Have you considered the case where English might not be the authors' first language? They may have written a draft in their mother tongue and merely translated it using LLMs. The style may not be to many people's liking, but this is a technical manuscript, and I would think the novelty of the ideas is what matters here, more than the novelty of the prose.
32. anonymous908213 ◴[] No.45664838{4}[source]
Sycophancy was actually buffed again a week after GPT-5 released. It was rather ham-fisted, as it will now obsessively reply with "Good question!" as though it will get the hose again if it does not.

"August 15, 2025 GPT-5 Updates We’re making GPT-5’s default personality warmer and more familiar. This is in response to user feedback that the initial version of GPT-5 came across as too reserved and professional. The differences in personality should feel subtle but create a noticeably more approachable ChatGPT experience.

Warmth here means small acknowledgements that make interactions feel more personable — for example, “Good question,” “Great start,” or briefly recognizing the user’s circumstances when relevant."

The "post-mortem" article on sycophancy in GPT-4 models revealed that the reason it occurred was because users, on aggregate, strongly prefer sycophantic responses and they operated based on that feedback. Given GPT-5 was met with a less-than-enthusiastic reception, I suppose they determined they needed to return to appealing to the lowest common denominator, even if doing so is cringe.

33. jll29 ◴[] No.45665063{4}[source]
I agree with the "writing is thinking" part, but I think most would agree LLM output is at least "eloquent", and that non-native speakers can benefit from reformulation.

This is _not_ to say that I'd suggest LLMs should be used to write papers.

34. grey-area ◴[] No.45665292{5}[source]
I think that’s a good example of a superficial problem in a quickly typed statement, easily ignored, vs the profound and deep problems with LLM texts - they are devoid of meaning and purpose.
35. jeltz ◴[] No.45665345{5}[source]
Your comment was low-quality noise, while the one you replied to was on topic and useful. A short, useful comment with a typo is high-quality content, while a perfectly written LLM comment would be junk.