
277 points simianwords | 1 comment
fumeux_fume No.45149658 [source]
I like that OpenAI is drawing a clear line on what “hallucination” means, giving examples, and showing practical steps for addressing them. The post isn’t groundbreaking, but it helps set the tone for how we talk about hallucinations.

What bothers me about the hot takes is the claim that “all models do is hallucinate.” That collapses the distinction entirely. Yes, models are just predicting the next token—but that doesn’t mean all outputs are hallucinations. If that were true, it’d be pointless to even have the term, and it would ignore the fact that some models hallucinate much less than others because of scale, training, and fine-tuning.

That’s why a careful definition matters: not every generation is a hallucination, and having good definitions lets us talk about the real differences.
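
A rough way to make those “real differences” concrete (just a sketch; `ask_model`, the QA pairs, and the refusal phrases are placeholders, not anything from the post): score each answer as correct, an abstention, or a confident wrong answer, and compare hallucination rates across models. If “all models do is hallucinate” were literally true, every model would score 1.0 on the last bucket, which is not what you actually see.

    # Sketch: treat "hallucination" as a confident wrong answer, per the
    # post's framing. `ask_model` is any callable returning a string;
    # the QA pairs and refusal phrases are illustrative placeholders.
    def hallucination_rate(ask_model, qa_pairs,
                           refusals=("i don't know", "i'm not sure")):
        correct = abstained = hallucinated = 0
        for question, answer in qa_pairs:
            reply = ask_model(question).strip().lower()
            if any(phrase in reply for phrase in refusals):
                abstained += 1       # declined to answer
            elif answer.lower() in reply:
                correct += 1         # grounded answer, not a hallucination
            else:
                hallucinated += 1    # confident and wrong
        n = len(qa_pairs)
        return {"correct": correct / n,
                "abstained": abstained / n,
                "hallucinated": hallucinated / n}

    # Usage idea: run two models over the same questions and compare
    # their "hallucinated" fractions; the gap between them is exactly
    # the distinction the blanket "it's all hallucination" take erases.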

replies(9): >>45149764 #>>45151155 #>>45152383 #>>45154710 #>>45155176 #>>45156170 #>>45157195 #>>45166309 #>>45184453 #
1. catlifeonmars No.45152383 [source]
So there are two angles to this:

- From the perspective of LLM research/engineering, saying that all LLM generation is hallucination is not particularly useful; it tells you nothing about the problem space.

- From the perspective of AI research/engineering in general (not LLM-specific), it can be useful to consider architectures that do not rely on hallucination in the second sense.