
LLMs can get "brain rot"

(llm-brain-rot.github.io)
466 points by tamnd | 1 comment
avazhi ◴[] No.45658886[source]
“Studying “Brain Rot” for LLMs isn’t just a catchy metaphor—it reframes data curation as cognitive hygiene for AI, guiding how we source, filter, and maintain training corpora so deployed systems stay sharp, reliable, and aligned over time.”

An LLM-written line if I’ve ever seen one. Looks like the authors have their own brainrot to contend with.

replies(12): >>45658899 #>>45660532 #>>45661492 #>>45662138 #>>45662241 #>>45664417 #>>45664474 #>>45665028 #>>45668042 #>>45670485 #>>45670910 #>>45671621 #
az09mugen ◴[] No.45665028[source]
It is sad that people study "brain rot" for LLMs but not for humans. If people were more invested in cognitive hygiene for humans, many social media platforms would be far saner.
replies(1): >>45665335 #
jeltz ◴[] No.45665335[source]
What do you base that claim on, that people don't study this? I do not follow the research in that area, but I would find it highly unlikely that there is none.
replies(1): >>45675975 #
az09mugen ◴[] No.45675975[source]
I did not express myself correctly, and you are kinda right. The point I was trying to make is that cognitive hygiene seems to be treated as more mainstream and important for LLMs than for humans. There are, of course, studies of human "brain rot", such as this one: https://publichealthpolicyjournal.com/mit-study-finds-artifi...

What makes me sad is that some people spend time worrying about balancing the random weights of some LLM for the sake of some "alignment" or some "brain rot". Aren't humans more important than LLMs? Are we, as humans, that tied to LLMs?

English is not my native language and I hope I made my point clearer.