
416 points | floverfelt
sebnukem2
> hallucinations aren’t a bug of LLMs, they are a feature. Indeed they are the feature. All an LLM does is produce hallucinations, it’s just that we find some of them useful.

Nice.

tptacek
In that framing, you can look at an agent as simply a filter on those hallucinations.
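
Concretely, that framing is a sample-and-verify loop: the model proposes candidates, and something outside the model (tests, a compiler, a tool result) decides which ones to keep. A rough Python sketch, where llm_propose and passes_checks are hypothetical stand-ins for the model call and the external check:

    import random
    from typing import Callable, Optional

    def llm_propose(prompt: str) -> str:
        # Hypothetical stand-in for a model call that samples one
        # candidate ("hallucination") for the given prompt.
        return random.choice([
            "candidate answer A",
            "candidate answer B",
            "obvious nonsense",
        ])

    def agent(prompt: str,
              passes_checks: Callable[[str], bool],
              max_tries: int = 5) -> Optional[str]:
        # Sample candidates and keep the first one that survives an
        # external check; the check is the "filter" in this framing.
        for _ in range(max_tries):
            candidate = llm_propose(prompt)
            if passes_checks(candidate):
                return candidate
        return None

    # Example: accept only candidates that mention "answer".
    print(agent("some task", passes_checks=lambda c: "answer" in c))

The filter is whatever ground truth the agent can reach; the model itself never distinguishes the useful hallucinations from the useless ones.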
armchairhacker
This vaguely relates to a theory about human thought: the subconscious constantly generates random ideas and a filter discards the unreasonable ones; in people with delusions (e.g. schizophrenia), that filter is broken.

Salience (https://en.wikipedia.org/wiki/Salience_(neuroscience)), "the property by which some thing stands out", is something LLMs have trouble with, probably because they're trained on human text, which ranges from accurate descriptions of reality to nonsense.