sebnukem2:
> hallucinations aren’t a bug of LLMs, they are a feature. Indeed they are the feature. All an LLM does is produce hallucinations, it’s just that we find some of them useful.

Nice.

tptacek:
In that framing, you can look at an agent as simply a filter on those hallucinations.
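
A rough sketch of that framing, in Python, with a made-up generate_candidate() standing in for the model and a toy arithmetic check standing in for a real verifier (a compiler, a test suite, a retrieval lookup):

    import random

    # Hypothetical stand-in for an LLM: it emits candidate answers, some of
    # which are confidently wrong ("hallucinations").
    def generate_candidate(prompt: str) -> str:
        return random.choice(["2 + 2 = 4", "2 + 2 = 5", "2 + 2 = 22"])

    # The "filter": a check grounded outside the model. Here it is a toy
    # arithmetic check; in a real agent it could be anything that touches
    # the world rather than the model's own output.
    def passes_check(candidate: str) -> bool:
        lhs, rhs = candidate.split("=")
        return eval(lhs) == int(rhs)

    # The agent keeps sampling until a candidate survives the external check.
    def agent(prompt: str, max_attempts: int = 10):
        for _ in range(max_attempts):
            candidate = generate_candidate(prompt)
            if passes_check(candidate):
                return candidate
        return None

    print(agent("What is 2 + 2?"))

The only point is the shape of the loop: generation is unconstrained, and all of the grounding lives in the check.
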
keeda:
More of an error-correcting feedback loop than a filter, really. Which is very much what we do as humans, apparently. One theory that has become influential in neuroscience recently is predictive processing (https://en.wikipedia.org/wiki/Predictive_coding): it postulates that we also constantly generate a "mental model" of our environment (a literal "prediction") and use sensory inputs to correct and update it.

So the only real difference between "perception" and a "hallucination" is whether it is supported by physical reality.
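
A minimal sketch of that predict-and-correct loop, with invented numbers and a plain weighted update standing in for whatever the brain actually does:

    # "Perception" here is the running prediction; the sensory signal only
    # enters through the prediction error, which nudges the internal model.
    def predictive_loop(observations, learning_rate=0.3):
        prediction = 0.0                         # initial internal model
        for sensed in observations:
            error = sensed - prediction          # prediction error from the senses
            prediction += learning_rate * error  # correct the model
            yield prediction                     # what gets "perceived"

    # Noisy readings of a quantity whose true value is about 10; the
    # percept converges toward it as errors are fed back.
    readings = [9.5, 10.4, 9.8, 10.1, 10.2]
    for percept in predictive_loop(readings):
        print(round(percept, 2))
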

jonoc:
That's a fascinating way to put it.