Nice.
This is different from human hallucinations, where the person makes something up because of something wrong with the mind rather than some underlying issue with the brain's architecture.
Salience (https://en.wikipedia.org/wiki/Salience_(neuroscience)), "the property by which some thing stands out", is something LLMs have trouble with. Probably because they're trained on human text, which ranges from accurate descriptions of reality to nonsense.
> In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called confabulation or delusion) is a response generated by AI that contains false or misleading information presented as fact.
You say
> This is different from human hallucinations, where the person makes something up because of something wrong with the mind rather than some underlying issue with the brain's architecture.
For consistency you might as well say everything the human mind does is hallucination. It's the same sort of claim. This claim at least has the virtue of being taken seriously by people like Descartes.
https://en.wikipedia.org/wiki/Hallucination_(artificial_inte...
[1] https://huggingface.co/docs/smolagents/conceptual_guides/int...
But much more than an arithmetic engine, the current crop of AI needs an epistemic engine: something that would help it follow logic, avoid contradictions, and determine what is a well-established fact and what is a shaky conjecture. Then we might start trusting the AI.
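One crude stand-in for that kind of epistemic check (not something current models do internally, just an illustration) is to ask the same question several times and treat low agreement as a sign of shaky conjecture. A minimal sketch in Python, assuming the openai client library is installed and using an example model name:

```python
from collections import Counter
from openai import OpenAI  # assumes the openai Python client is installed

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def consistency_check(question: str, n: int = 5, model: str = "gpt-4o-mini") -> tuple[str, float]:
    """Ask the same question n times and measure how often the answers agree.

    High agreement is (weak) evidence of a well-established fact;
    low agreement suggests a shaky conjecture or a hallucination.
    """
    answers = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question + " Answer in one short sentence."}],
            temperature=1.0,  # keep sampling on so disagreement can surface
        )
        answers.append(resp.choices[0].message.content.strip().lower())

    best, count = Counter(answers).most_common(1)[0]
    return best, count / n  # majority answer and its agreement ratio

answer, agreement = consistency_check("What year did the Apollo 11 mission land on the Moon?")
print(f"{answer!r} (agreement: {agreement:.0%})")
```

Exact string matching is of course far too blunt for free-form answers; the point is only to show the shape of a consistency check, not a real epistemic engine.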
It implies that some parts of the output aren’t hallucinations, when the reality is that none of it has any thought behind it.
So, we VALUE creativity, we claim that it helps us solve problems, improves our understanding of the universe, etc.
BUT in people with some mental illnesses, the brain is so creative that they lose track of where reality ends and where their imagination/creativity takes over.
e.g. Hearing voices? That's the brain conjuring up a voice - auditory and visual hallucinations are the easy examples.
But it goes further: depression is where people's brains create scenarios in which there is no hope and no escape. Anxiety too - the brain is conjuring up fears of what's to come.
It's possible LLMs are lying but my guess is that they really just can't tell the difference.
So the only real difference between "perception" and a "hallucination" is whether it is supported by physical reality.
To me this is the most bizarre part. Have we ever had a technology deployed at this scale without a true understanding of its inner workings?
My fear is that the general public perception of AI will be damaged, since for most people LLMs = AI.
The idea that we don't is tabloid journalism. It comes from the fact that the output is (usually) randomised - taken to mean, by those who lack the technical chops, that programmers "don't know how it works" because the output is nondeterministic.
This is notwithstanding that we absolutely can make the output repeatable by turning off the randomisation (temperature 0).
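For what it's worth, the repeatability claim is easy to check locally with greedy decoding, which is the transformers-side equivalent of temperature 0. A minimal sketch using Hugging Face transformers (gpt2 is just a small example model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # example model; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("The capital of France is", return_tensors="pt")

# Greedy decoding: no sampling, so the same prompt yields the same continuation.
with torch.no_grad():
    out_a = model.generate(**inputs, do_sample=False, max_new_tokens=20)
    out_b = model.generate(**inputs, do_sample=False, max_new_tokens=20)

print(tokenizer.decode(out_a[0], skip_special_tokens=True))
assert torch.equal(out_a, out_b)  # identical output on both runs
```

On the same machine this prints the same continuation every run; only the sampling step introduces randomness.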
I can recognize my own metacognition there. My model of reality course-corrects my interpretation of the incoming information on the fly. Optical illusions feel very similar, whereby the inner reality model clashes with what is observed.
For general AI, it needs a world model that can be tested against, where surprise is noted and the model is updated. Looping LLM output with test cases is a crude approximation of that world model.
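A rough sketch of that "loop the output against test cases" idea, with failing tests playing the role of surprise (assuming the openai client library; the model name and prompt wiring are just placeholders):

```python
import subprocess
import tempfile
from openai import OpenAI  # assumes the openai Python client is installed

client = OpenAI()

def generate_until_tests_pass(task: str, test_code: str, max_rounds: int = 3) -> str:
    """Crude generate -> test -> feed failures back loop.

    The failing test output is the "surprise" signal: evidence that the
    model's picture of the problem disagrees with reality.
    """
    feedback = ""
    for _ in range(max_rounds):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name
            messages=[{
                "role": "user",
                "content": f"{task}\nReturn only Python code, no prose.\n{feedback}",
            }],
        )
        candidate = resp.choices[0].message.content
        # (Real code would also strip markdown fences from the reply; omitted here.)

        # Run the candidate together with the tests in a throwaway script.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(candidate + "\n" + test_code)
            path = f.name
        result = subprocess.run(["python", path], capture_output=True, text=True)

        if result.returncode == 0:
            return candidate  # tests passed: no surprise, keep the answer
        feedback = f"Your last attempt failed these tests:\n{result.stderr}\nFix it."

    raise RuntimeError("No passing solution within the round limit")
```

It's a blunt instrument - the "world" here is only whatever the tests happen to encode - but the structure (predict, check against something external, update on surprise) is the point.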