
LLMs can get "brain rot"

(llm-brain-rot.github.io)
466 points | tamnd
avazhi No.45658886
“Studying “Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI, guiding how we source, filter, and maintain training corpora so deployed systems stay sharp, reliable, and aligned over time.”

An LLM-written line if I’ve ever seen one. Looks like the authors have their own brainrot to contend with.

replies: 12
mortenjorck No.45670485
This is pretty clearly an LLM-written sentence, but the list structure and even the em dashes are red herrings.

What qualifies this as an LLM sentence is that it makes a mildly insightful observation, indeed an inference, a sort of first-year-student level of analysis that puts a nice bow on the train of thought yet doesn't offer anything novel. It adds nothing; it's just semantic boilerplate that also happens to follow a predictable style.

replies: 2
ratelimitsteve No.45670620
For me it was the word "corpora".