Maybe it goes against the definition, but I like saying that _all_ output is a hallucination when explaining LLMs.
It just happens that a lot of that output is useful and corresponds with the real world.
replies(1):
> It just happens that a lot of that output is useful and corresponds with the real world.
I think treating all LLM output as 'hallucinations', while taking advantage of the fact that these hallucinations are often true of the real world, is a good mindset, especially for nontechnical people, who might not otherwise realise that correct and incorrect outputs are produced the same way.