
LLMs can get "brain rot"

(llm-brain-rot.github.io)
466 points tamnd | 2 comments
avazhi No.45658886
“Studying “Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI, guiding how we source, filter, and maintain training corpora so deployed systems stay sharp, reliable, and aligned over time.”

An LLM-written line if I’ve ever seen one. Looks like the authors have their own brainrot to contend with.

replies(12): >>45658899 #>>45660532 #>>45661492 #>>45662138 #>>45662241 #>>45664417 #>>45664474 #>>45665028 #>>45668042 #>>45670485 #>>45670910 #>>45671621 #
1. mtillman No.45662138
I recently saw an HN comment about LLMs that put "training" in scare quotes but left "thinking" and "reasoning" unquoted.

Making my (totally rad fwiw) Fiero look like a Ferrari does not make it a Ferrari.

replies(1): >>45662342 #
2. snickerbockers No.45662342
I like to call it tuning; that's closer to the way they "learn" by adjusting coefficients, and there's no proven similarity between any existing AI and human cognition.
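
By "tuning" I just mean nudging numbers to shrink an error signal. A toy sketch in Python (pure illustration with made-up data and gains; real LLMs do this over billions of coefficients):

    # Toy "tuning": fit a single coefficient w so that w*x approximates y.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # made-up (x, y) pairs, y ~ 2x
    w = 0.0    # the coefficient being tuned
    lr = 0.01  # learning rate: how hard each nudge is

    for _ in range(1000):
        for x, y in data:
            err = w * x - y    # error signal on this example
            w -= lr * err * x  # gradient step: nudge w to shrink the error

    print(w)  # ends up near 2.0 -- "learned" without anything like cognition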

Sometimes I wonder whether any second-order control system would qualify as "AI" under the extremely vague definition of the term.
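
For instance, a plain PD controller steering a cart toward a setpoint gives you a textbook second-order closed loop. A sketch with made-up gains and a unit-mass cart, for illustration only:

    # PD controller driving a cart (double integrator) toward a setpoint.
    # Position + velocity feedback makes the closed loop second-order.
    kp, kd = 2.0, 1.5    # made-up proportional and derivative gains
    pos, vel = 0.0, 0.0  # cart state
    target = 1.0         # setpoint
    dt = 0.01            # simulation timestep

    for _ in range(2000):
        force = kp * (target - pos) - kd * vel  # react to error and its rate
        vel += force * dt                       # integrate acceleration
        pos += vel * dt                         # integrate velocity

    print(round(pos, 3))  # settles near 1.0: adaptive, goal-seeking, not "AI"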