
277 points | simianwords | 1 comment
fumeux_fume ◴[] No.45149658[source]
I like that OpenAI is drawing a clear line around what “hallucination” means, giving examples, and laying out practical steps for addressing it. The post isn’t groundbreaking, but it helps set the tone for how we talk about hallucinations.

What bothers me about the hot takes is the claim that “all models do is hallucinate.” That collapses the distinction entirely. Yes, models are just predicting the next token—but that doesn’t mean all outputs are hallucinations. If that were true, it’d be pointless to even have the term, and it would ignore the fact that some models hallucinate much less than others because of scale, training, and fine-tuning.
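To make that concrete, here is a minimal sketch (assuming the Hugging Face transformers library and plain GPT-2, neither of which the post specifies): greedy next-token prediction can produce a factually correct continuation, which is exactly the kind of output the “everything is a hallucination” framing can’t distinguish from a fabricated one.

    # Minimal sketch: next-token prediction yielding a factually correct continuation.
    # Assumes the Hugging Face transformers library and GPT-2 (my choice, not from the post).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The capital of France is", return_tensors="pt")
    # Greedy decoding of a few extra tokens; GPT-2 typically continues with " Paris".
    out = model.generate(**inputs, max_new_tokens=3, do_sample=False)
    print(tokenizer.decode(out[0], skip_special_tokens=True))

The same decoding loop can just as easily emit a confident falsehood on a prompt the model has no grounding for, which is why the rate of hallucination, not the mechanism, is the useful thing to compare across models.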

That’s why a careful definition matters: not every generation is a hallucination, and having good definitions lets us talk about the real differences.

replies(9): >>45149764 #>>45151155 #>>45152383 #>>45154710 #>>45155176 #>>45156170 #>>45157195 #>>45166309 #>>45184453 #
1. hodgehog11 ◴[] No.45149764[source]
Absolutely in agreement here. The same point applies to the words "know", "understand", and "conceptualize". "Generalize", "memorize", and "out-of-distribution" should likewise be used cautiously when working with systems trained on incomprehensibly large datasets.

We need to establish proper definitions and models for these things before we can begin to argue about them. Otherwise we're just wasting time.