> hallucinations aren’t a bug of LLMs, they are a feature. Indeed they are the feature. All an LLM does is produce hallucinations, it’s just that we find some of them useful.
Nice.
replies(7):
So the only real difference between a "perception" and a "hallucination" is whether it is supported by physical reality.
I can recognize my own metacognition there. My model of reality course-corrects my interpretation of the incoming sensory feed on the fly. Optical illusions feel very similar: the inner model of reality clashes with what is observed.
For general AI, the system needs a world model that its predictions can be tested against, where surprise is noted and the model is updated. Looping LLM output through test cases is a crude approximation of that world model.
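A rough sketch of that loop, assuming a hypothetical `llm_generate` stub in place of a real model call and simple pass/fail test callables:

```python
# Rough sketch of "looping LLM output with test cases": generate a candidate,
# check it against tests (the stand-in world model), and feed the surprises
# (failed checks) back into the next prompt. `llm_generate` is a hypothetical
# placeholder, not a real API.

from typing import Callable, List, Tuple

def llm_generate(prompt: str) -> str:
    """Placeholder for an actual model call."""
    raise NotImplementedError("plug in a real LLM client here")

def loop_until_tests_pass(
    task: str,
    tests: List[Callable[[str], Tuple[bool, str]]],  # each test returns (passed, message)
    max_rounds: int = 5,
) -> str:
    prompt = task
    for _ in range(max_rounds):
        candidate = llm_generate(prompt)

        # "Surprise": any test whose outcome contradicts the candidate.
        failures = []
        for test in tests:
            ok, msg = test(candidate)
            if not ok:
                failures.append(msg)

        if not failures:
            return candidate  # candidate agrees with the test world model

        # Update: push the observed mismatches back into the next attempt.
        prompt = task + "\n\nPrevious attempt failed these checks:\n" + "\n".join(failures)

    raise RuntimeError("no candidate passed all tests within the retry budget")
```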