LLMs can get "brain rot"

(llm-brain-rot.github.io)
466 points | tamnd
avazhi (No.45658886)
“Studying “Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI, guiding how we source, filter, and maintain training corpora so deployed systems stay sharp, reliable, and aligned over time.”

An LLM-written line if I’ve ever seen one. Looks like the authors have their own brainrot to contend with.

askafriend (No.45658899)
If it conveys the intended information, then what's wrong with that? You're fighting a tsunami here. People are going to use LLMs to help their writing now and forever.
binary132 (No.45658936)
The brainrot apologists have arrived
askafriend (No.45658969)
Why shouldn't the author use LLMs to assist their writing?

The issue is how tools are used, not that they are used at all.

dwaltrip (No.45662249)
The paragraph in question is a very poor use of the tool.