Nice.
Salience (https://en.wikipedia.org/wiki/Salience_(neuroscience)), "the property by which some thing stands out", is something LLMs have trouble with. Probably because they're trained on human text, which ranges from accurate descriptions of reality to nonsense.
So the only real difference between "perception" and a "hallucination" is whether it is supported by physical reality.
I can recognize my own metacognition there. My model of reality course-corrects my interpretation of the incoming information feed on the fly. Optical illusions feel very similar: the inner reality model clashes with what is actually observed.
For general AI, you need a world model that predictions can be tested against, where surprise is noted and the model is updated. Looping LLM output through test cases is a crude approximation of that world model.
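A minimal sketch of that loop, just to make the idea concrete (llm_generate and run_tests here are hypothetical stand-ins, not any particular library's API): generate a candidate, test it against "reality" (the test cases), treat failures as the surprise signal, and fold them back into the next generation.

    from typing import Callable, List, Tuple

    def refine_with_tests(
        llm_generate: Callable[[str], str],                   # hypothetical: prompt -> candidate solution
        run_tests: Callable[[str], List[Tuple[str, bool]]],   # candidate -> [(test name, passed)]
        prompt: str,
        max_rounds: int = 5,
    ) -> str:
        """Crude world-model loop: test the output, note surprises, update, retry."""
        candidate = llm_generate(prompt)
        for _ in range(max_rounds):
            results = run_tests(candidate)
            failures = [name for name, passed in results if not passed]
            if not failures:
                # No surprise left: the output agrees with the tests we have.
                return candidate
            # Surprise: reality (the tests) disagreed with the output. Feed that back.
            prompt += "\nThese tests failed: " + ", ".join(failures) + "\nRevise the solution."
            candidate = llm_generate(prompt)
        return candidate

The tests play the role of physical reality in the perception/hallucination distinction above: they're the only part of the loop the model can't talk itself out of.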