
LLMs can get "brain rot"

(llm-brain-rot.github.io)
466 points by tamnd | source
avazhi ◴[] No.45658886[source]
“Studying “Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI, guiding how we source, filter, and maintain training corpora so deployed systems stay sharp, reliable, and aligned over time.”

An LLM-written line if I’ve ever seen one. Looks like the authors have their own brainrot to contend with.

replies(12): >>45658899 #>>45660532 #>>45661492 #>>45662138 #>>45662241 #>>45664417 #>>45664474 #>>45665028 #>>45668042 #>>45670485 #>>45670910 #>>45671621 #
askafriend ◴[] No.45658899[source]
If it conveys the intended information then what's wrong with that? You're fighting a tsunami here. People are going to use LLMs to help their writing now and forever.
replies(12): >>45658936 #>>45658977 #>>45658987 #>>45659011 #>>45660194 #>>45660255 #>>45660793 #>>45660811 #>>45661637 #>>45662211 #>>45662724 #>>45663177 #
binary132 ◴[] No.45658936[source]
The brainrot apologists have arrived
replies(1): >>45658969 #
askafriend ◴[] No.45658969[source]
Why shouldn't the author use LLMs to assist their writing?

The issue is how tools are used, not that they are used at all.

replies(4): >>45660277 #>>45661374 #>>45661646 #>>45662249 #
1. SkyBelow ◴[] No.45661646[source]
Assist without replacing.

If you were to pass your writing in and have it provide criticism, pointing out places where you should consider changes, and even offering some example revisions that you can selectively include when they keep the intended tone and implications, then I don't see the issue.

When you have it rewrite the entire piece and you paste that for someone else to read, then it becomes an issue. Potentially, as I think the context matters. The more a piece of writing is meant to be from you, the more of an issue I see. Having an AI write or rewrite a birthday greeting or get-well wishes seems worse than having it write up your weekly TPS report. As a simple metric, I judge by how bad I would feel if what I'm writing were summarized by another AI or automatically fed into a similar system.

In a text post like this, where I expect others are reading my own words, I wouldn't use an AI to rewrite what I'm posting.

As you say, it is in how the tool is used. Is it used to assist and improve your thinking, or to replace it? That isn't really a binary classification but more of a continuum, and the closer it gets to the replacement end, the more you will see others taking issue with it.