It just happens that a lot of that output is useful and corresponds to the real world.
To say "it only hallucinates sometimes" is burying the lede, and it's confusing for people who are trying to use it.
Q: How do I stop hallucinations? A: Useless question, because you can't. It's the very mechanism that gives you what you want.
It does, however, make the point that hallucination is not some special glitch distinct from the normal operation of the model. It's just outputting plausible text, which is right often enough to be useful.
Adding some extra sauce to help the model evaluate the correctness of its answers, or to flag when it doesn't know enough to give a good one, is obviously one way to mitigate this otherwise innate behaviour.
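For what it's worth, here's a minimal sketch of one version of that "extra sauce": a second pass where the model grades its own draft and abstains below a confidence threshold. The names ask_model, answer_with_self_check and the threshold value are all made up for illustration; in practice ask_model would wrap whatever LLM API you're actually calling.

    # Sketch of a second-pass self check: generate an answer, then ask the
    # model to grade its own confidence and abstain if the score is low.

    def ask_model(prompt: str) -> str:
        # Placeholder: swap in your real LLM client call here.
        raise NotImplementedError("wire this up to your LLM of choice")

    def answer_with_self_check(question: str, threshold: int = 7) -> str:
        draft = ask_model(f"Answer concisely: {question}")

        # Second pass: the model rates how well-supported its own draft is.
        grade = ask_model(
            "On a scale of 1-10, how confident are you that the following "
            "answer is factually correct? Reply with just the number.\n\n"
            f"Question: {question}\nAnswer: {draft}"
        )
        try:
            score = int(grade.strip())
        except ValueError:
            score = 0  # unparseable grade: treat as low confidence

        if score >= threshold:
            return draft
        return "I'm not confident enough to answer that."

It doesn't make the hallucination go away, of course. It just gives you a cheap lever for trading coverage against the rate at which confidently wrong answers get through.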
I think treating all LLM output as 'hallucinations', while making use of the fact that these hallucinations are often true of the real world, is a good mindset, especially for non-technical people who might otherwise not realise it.