
LLMs can get "brain rot"

(llm-brain-rot.github.io)
466 points | tamnd | source
AznHisoka ◴[] No.45656299[source]
Can someone explain this in layman's terms?
replies(4): >>45656501 #>>45657077 #>>45658026 #>>45666082 #
PaulHoule ◴[] No.45656501[source]
They benchmark two different feeds of "junk" tweets:

  (1) a feed of the most popular tweets based on likes, retweets, and such
  (2) an algorithmic feed that looks for clickbait in the text
and blend these in different proportions with a feed of random tweets that are neither popular nor clickbait, and find that feed (1) has the more damaging effect on chatbot performance. That is, they feed the blended tweets into the model as training data, then ask the model to do things and get worse outcomes.
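Roughly, the mixture construction would look something like this (a toy sketch with made-up data and names, not the authors' actual pipeline):

  import random

  # Hypothetical stand-ins for the two feeds described above.
  popular_tweets = [f"viral tweet {i}" for i in range(1000)]    # feed (1): high engagement
  control_tweets = [f"ordinary tweet {i}" for i in range(1000)]  # control: neither popular nor clickbait

  def make_mixture(junk, control, junk_ratio, n_total=500, seed=0):
      """Sample a training corpus containing the given fraction of junk tweets."""
      rng = random.Random(seed)
      n_junk = int(n_total * junk_ratio)
      corpus = rng.sample(junk, n_junk) + rng.sample(control, n_total - n_junk)
      rng.shuffle(corpus)
      return corpus

  # One mixture per junk ratio; each mixture would then be used for further
  # training of the model before re-running the benchmarks.
  for ratio in [0.0, 0.2, 0.5, 0.8, 1.0]:
      corpus = make_mixture(popular_tweets, control_tweets, ratio)
      print(ratio, len(corpus))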
replies(1): >>45657029 #
ForHackernews ◴[] No.45657029[source]
Blended in how? To the training set?
replies(1): >>45660602 #
PaulHoule ◴[] No.45660602[source]
Very early in training.