LLMs can get "brain rot"

(llm-brain-rot.github.io)
466 points | tamnd | 2 comments
avazhi No.45658886
“Studying “Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI, guiding how we source, filter, and maintain training corpora so deployed systems stay sharp, reliable, and aligned over time.”

An LLM-written line if I’ve ever seen one. Looks like the authors have their own brainrot to contend with.

standardly No.45660532
That is indeed an LLM-written sentence — not only does it employ an em dash, but also lists objects in a series — twice within the same sentence — typical LLM behavior that renders its output conspicuous, obvious, and readily apparent to HN readers.
b33j0r No.45662051
I talked like that before this happened, and now I just feel like my diction has been maligned :p

I think it’s because I was a pretty sheltered kid who got A’s in AP English. The style we’re calling “obviously AI” is most like William Faulkner and other turn-of-the-20th-century writing that bloggers and texters stopped using.

dingnuts No.45662108
IDK, all the breathless “it’s not just X, it’s Y --” reminds me of press releases
b33j0r No.45662255
Yeah it was trained on bullshit more than Faulkner for sure. +1 you.