I agree with your first paragraph, but not your second. Models can still hallucinate when temperature is set to zero (i.e., when we always pick the highest-probability token from the model's output distribution at each step).
In my mind, hallucination is when some aspect of the model's response should be consistent with reality but is not, and the reality-inconsistent information is not directly attributable to, or deducible from, (mis)information in the pre-training corpus.
While hallucination can be triggered by setting the temperature high, it can also result from any number of deficiencies in pre- and post-training that leave the model outputting bad token probability distributions.
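To make the temperature point concrete, here's a minimal sketch of what I mean (NumPy, with made-up logits; not any particular model's API). At temperature zero, decoding collapses to an argmax over the model's distribution, so if the model puts the most probability mass on a wrong token, you get a hallucination no matter how the sampler is configured:

```python
import numpy as np

def decode_step(logits, temperature, rng=np.random.default_rng()):
    """Pick the next token id from raw logits at a given temperature."""
    if temperature == 0:
        # Greedy decoding: always take the highest-probability token.
        return int(np.argmax(logits))
    # Temperature scaling: divide logits by T, softmax, then sample.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

# Hypothetical distribution where the model itself is wrong:
# index 0 is a factual error but gets the most probability mass.
logits = np.array([2.0, 1.5, 0.1])
print(decode_step(logits, temperature=0.0))  # always 0 -> hallucinates even at T=0
print(decode_step(logits, temperature=1.0))  # sampling spreads picks across tokens
```

Raising the temperature flattens the distribution and makes low-probability (often wrong) tokens more likely to be sampled, but it can't be the whole story: the bad distribution in the first place is a training problem, not a sampling one.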