
277 points | simianwords | 1 comment
fumeux_fume No.45149658
I like that OpenAI is drawing a clear line on what “hallucination” means, giving examples, and showing practical steps for addressing them. The post isn’t groundbreaking, but it helps set the tone for how we talk about hallucinations.

What bothers me about the hot takes is the claim that “all models do is hallucinate.” That collapses the distinction entirely. Yes, models are just predicting the next token—but that doesn’t mean all outputs are hallucinations. If that were true, it’d be pointless to even have the term, and it would ignore the fact that some models hallucinate much less than others because of scale, training, and fine-tuning.
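To make "predicting the next token" concrete, here is a minimal sketch (assuming PyTorch, the Hugging Face transformers library, and GPT-2, chosen purely for illustration). One and the same decoding loop produces every continuation, accurate or not, which is why the useful distinction is about an output's relation to fact, not the generation mechanism:

    # Minimal sketch, not from the post: greedy next-token decoding with GPT-2,
    # assuming PyTorch and the Hugging Face `transformers` library are installed.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(10):
            logits = model(ids).logits        # shape: [1, seq_len, vocab_size]
            next_id = logits[0, -1].argmax()  # greedy: take the most probable next token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))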

That’s why a careful definition matters: not every generation is a hallucination, and having good definitions lets us talk about the real differences.

ttctciyf No.45156170
"Hallucination" is a euphemism at best, and the implication it carries that LLMs correctly perceive (meaning) when they are not hallucinating is fallacious and disinforming.

The reification of counterfactual outputs, which are otherwise etiologically indistinguishable from the rest of LLM production, is a better candidate for the label "hallucination" IMO.