> "Hallucination" has always seemed like a misnomer to me anyway considering LLMs don't know anything. They just impressively get things right enough to be useful assuming you audit the output.
If you pick up a dictionary and review the definition of "hallucination", you'll see something along the lines of "something that you see, hear, feel or smell that does not exist".
https://dictionary.cambridge.org/dictionary/english/hallucin...
Your own description arguably reinforces the standard definition of hallucination. Models don't simply get things right: sometimes their output contradicts the content covered by their corpus, asserting things that don't exist in it or that outright conflict with factual information.
> If anything, I think all of their output should be called a hallucination.
No. Only the outputs that conflict with reality, namely with factual information.
Hence the term hallucination.