Maybe it goes against the definition, but when explaining LLMs I like saying that _all_ output is a hallucination.
It just happens that a lot of that output is useful and corresponds with the real world.
replies(1):
> It just happens that a lot of that output is useful and corresponds with the real world.
It does, however, make the point that hallucinations are not some special glitch distinct from the normal operation of the model. It's just outputting plausible text, which is right often enough to be useful.
Adding some extra sauce to help the model evaluate the correctness of its answers, or to recognise when it doesn't know enough to give a good one, is obviously one way to mitigate this otherwise innate behaviour.
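For illustration, a minimal sketch of what that extra sauce could look like: draft an answer, ask the model to rate its own confidence, and abstain below a threshold. `call_llm` and `answer_with_self_check` are hypothetical names rather than any particular provider's API; the canned replies just keep the example self-contained and runnable.

    def call_llm(prompt: str) -> str:
        # Hypothetical model call; swap in a real API client here.
        # Canned responses so the sketch runs on its own.
        if "Rate your confidence" in prompt:
            return "0.35"
        return "The Eiffel Tower is 330 metres tall."

    def answer_with_self_check(question: str, threshold: float = 0.7) -> str:
        # Step 1: generate a draft answer as usual.
        draft = call_llm(f"Answer concisely: {question}")
        # Step 2: ask the model to judge its own answer.
        verdict = call_llm(
            "Rate your confidence from 0 to 1 that the following answer is "
            f"factually correct.\nQuestion: {question}\nAnswer: {draft}\n"
            "Reply with only a number."
        )
        try:
            confidence = float(verdict.strip())
        except ValueError:
            confidence = 0.0  # treat an unparsable self-rating as "don't know"
        # Step 3: abstain rather than return a low-confidence answer.
        if confidence < threshold:
            return "I'm not confident enough to answer that."
        return draft

    if __name__ == "__main__":
        print(answer_with_self_check("How tall is the Eiffel Tower?"))

It doesn't stop the model from generating plausible-but-wrong text, it just adds a second pass that decides whether to surface it.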