
277 points | simianwords | 1 comment
fumeux_fume | No.45149658
I like that OpenAI is drawing a clear line on what “hallucination” means, giving examples, and showing practical steps for addressing them. The post isn’t groundbreaking, but it helps set the tone for how we talk about hallucinations.

What bothers me about the hot takes is the claim that “all models do is hallucinate.” That collapses the distinction entirely. Yes, models are just predicting the next token—but that doesn’t mean all outputs are hallucinations. If that were true, it’d be pointless to even have the term, and it would ignore the fact that some models hallucinate much less than others because of scale, training, and fine-tuning.
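For anyone unfamiliar with what "predicting the next token" actually looks like, here is a minimal sketch (assuming the Hugging Face transformers library and the public gpt2 checkpoint; the prompt and the five-token loop are just for illustration). The point is that the mechanism is the same whether the output is true or false; calling a generation a hallucination is a judgment about the content, not the mechanism.

```python
# Minimal sketch: greedy next-token prediction with a small causal LM.
# Assumes the Hugging Face `transformers` library and the public "gpt2" checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"  # illustrative prompt, not from the original post
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(5):  # generate five tokens, one prediction at a time
        logits = model(input_ids).logits[:, -1, :]      # scores over the vocabulary for the next token
        next_id = logits.argmax(dim=-1, keepdim=True)   # greedy choice of the most likely token
        input_ids = torch.cat([input_ids, next_id], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Whether that continuation is factual or fabricated, the model did exactly the same thing either way, which is why "it's all just next-token prediction" doesn't settle whether a given output is a hallucination.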

That’s why a careful definition matters: not every generation is a hallucination, and a good definition lets us talk about the real differences between models.

replies(9): >>45149764 #>>45151155 #>>45152383 #>>45154710 #>>45155176 #>>45156170 #>>45157195 #>>45166309 #>>45184453 #
1. player1234 | No.45166309
Correct, it is a useless term whose goal is to gaslight and anthropomorphise a system that just predicts the next token.