
LLMs can get "brain rot"

(llm-brain-rot.github.io)
466 points by tamnd | 1 comment
avazhi (No.45658886)
“Studying “Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI, guiding how we source, filter, and maintain training corpora so deployed systems stay sharp, reliable, and aligned over time.”

An LLM-written line if I’ve ever seen one. Looks like the authors have their own brainrot to contend with.

mvdtnz (No.45662241)
What is actually up with the "it's not just X, it's Y" cliché from LLMs? Supposedly these things are trained on all of the text on the internet, yet this is a phrasing I almost never read anywhere outside of LLM content. Where are they getting it from?
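
One way to sanity-check the intuition that the phrasing is rare in human text is to grep a corpus for the construction. A minimal sketch in Python; the regex is only a rough heuristic, and the corpus.txt path is a placeholder for whatever text you have on hand:

    import re

    # Rough heuristic for "not just X, it's Y" constructions; it will
    # miss variants and catch some false positives.
    PATTERN = re.compile(
        r"\b(?:isn't|isn\u2019t|is not|not) just\b[^.?!]{0,60}(?:it's|it\u2019s|it is|but)\b",
        re.IGNORECASE,
    )

    def cliche_rate(text: str) -> float:
        """Matches per 10k words, as a crude frequency measure."""
        words = len(text.split())
        return 10_000 * len(PATTERN.findall(text)) / max(words, 1)

    with open("corpus.txt", encoding="utf-8") as f:  # hypothetical corpus
        print(f"{cliche_rate(f.read()):.2f} hits per 10k words")

Running this over a human-written corpus versus a pile of LLM output would put a number on the gap the comment is gesturing at.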
kalavan (No.45672010)
It's probably getting amplified in the RLHF stage, because the earlier models didn't do that.

But that just shifts the question to "what kind of reviewer actually likes the 'it's not just X' cliché?" I have no idea.
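
Whatever the reviewers' taste, the amplification mechanism itself is easy to demonstrate. Here is a toy simulation, not the real RLHF pipeline: best-of-n selection stands in for preference optimization, and the base rate and style bonus are made-up numbers:

    import random

    BASE_RATE = 0.05   # assume the tic shows up in 5% of raw samples
    STYLE_BONUS = 0.5  # assume raters give it a slight scoring edge
    N = 4              # candidates generated per prompt

    def reward(has_tic: bool) -> float:
        """Noisy rater score with a small bias toward the tic."""
        return random.gauss(0.0, 1.0) + (STYLE_BONUS if has_tic else 0.0)

    random.seed(0)
    trials = 100_000
    picked = sum(
        # pick the highest-scoring of N candidates, each of which
        # independently contains the tic with probability BASE_RATE
        max((random.random() < BASE_RATE for _ in range(N)), key=reward)
        for _ in range(trials)
    )
    print(f"tic rate in raw samples:    {BASE_RATE:.1%}")
    print(f"tic rate after best-of-{N}: {picked / trials:.1%}")

Even a small bias roughly doubles the frequency in the selected outputs, and since each fine-tuning round trains on those outputs, the elevated rate becomes the next round's baseline. The tic compounds.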