LLMs can get "brain rot"

(llm-brain-rot.github.io)
466 points tamnd | 2 comments
avazhi No.45658886
“Studying “Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI, guiding how we source, filter, and maintain training corpora so deployed systems stay sharp, reliable, and aligned over time.”

An LLM-written line if I’ve ever seen one. Looks like the authors have their own brainrot to contend with.

standardly No.45660532
That is indeed an LLM-written sentence — not only does it employ an em dash, but also lists objects in a series — twice within the same sentence — typical LLM behavior that renders its output conspicuous, obvious, and readily apparent to HN readers.
itsnowandnever No.45660648
why do they always say "not only" or "it isn't just x but also y and z"? I hated that disingenuous verbosity before these LLMs came out, and now it's all over the place. I saw a post on LinkedIn that was literally just 10+ statements of "X isn't just Y, it's etc..." and thought I was having a stroke
Starlevel004 No.45661082
GPT loves lists, and that's a variant of a list.
wizzwizz4 No.45661150
Lists have a simpler grammatical structure than most parts of a sentence. Semantic similarity makes them easy to generate, even if you pad the grammar with filler. And, thanks to Western rhetoric, they nearly always come in threes: this makes them easy to predict!
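The stylistic "tells" the commenters point at (em-dash asides, "not only ... but also", "X isn't just Y", and three-item lists) can be sketched as a toy heuristic. This is purely illustrative, with made-up patterns and a made-up scoring scheme; it is not any real detector and would misfire constantly on human prose:

```python
import re

def llm_tell_score(text: str) -> int:
    """Count stylistic 'tells' discussed in the thread.
    Toy heuristic for illustration only -- not a reliable detector."""
    score = 0
    # Em dashes used as asides
    score += text.count("\u2014")
    # "not only ... but also" constructions
    score += len(re.findall(r"not only\b.*?\bbut also", text,
                            re.IGNORECASE | re.DOTALL))
    # "X isn't just Y, it's Z" constructions
    score += len(re.findall(r"isn't just\b.*?\bit's", text,
                            re.IGNORECASE | re.DOTALL))
    # Three-item comma lists ("sharp, reliable, and aligned")
    score += len(re.findall(r"\w+, \w+, and \w+", text))
    return score
```

For example, the quoted abstract sentence scores on the em dash and on both of its three-item series ("source, filter, and maintain"; "sharp, reliable, and aligned"), which is exactly the pattern standardly's comment parodies.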