
277 points by simianwords | 1 comment
yreg No.45156255
Maybe it goes against the definition, but when explaining LLMs I like saying that _all_ output is a hallucination.

It just happens that a lot of that output is useful and corresponds to the real world.

replies(1): >>45156423 #
kelnos No.45156423
Well yes, it goes against the accepted definition. And if all output is hallucination, then it's not really a useful way to describe anything, so why bother?
replies(3): >>45156460 #>>45156875 #>>45169827 #
MattPalmer1086 No.45156875
I agree that saying everything is a hallucination doesn't help narrow down possible solutions.

It does, however, make the point that hallucinations are not some special glitch distinct from the model's normal operation. The model is just outputting plausible text, which happens to be right often enough to be useful.
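
To make that concrete: at each step the model samples the next token from a probability distribution over plausible continuations, and nothing in that loop checks truth. A minimal sketch in Python, with invented logits that aren't taken from any real model:

    import numpy as np

    def sample_next_token(logits: np.ndarray, temperature: float = 0.8) -> int:
        # Softmax over the logits, then sample. The model only knows which
        # continuations are plausible; nothing here checks whether they are true.
        scaled = logits / temperature
        scaled -= scaled.max()  # numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()
        return int(np.random.choice(len(probs), p=probs))

    # Toy continuation of "The capital of Australia is", with made-up logits.
    vocab = ["Sydney", "Canberra", "Melbourne"]
    logits = np.array([2.1, 1.9, 0.5])
    print(vocab[sample_next_token(logits)])  # frequently "Sydney": plausible, but wrong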

Adding some extra sauce to help the model evaluate the correctness of its answers, or recognise when it doesn't know enough to give a good one, is obviously one way to mitigate this otherwise innate behaviour.
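
One rough sketch of that kind of extra sauce is a self-check pass: generate a draft answer, ask the model to rate its own confidence, and abstain below a threshold. Here call_llm is a hypothetical placeholder for whatever completion API is actually in use, and the 0.7 threshold is arbitrary:

    def call_llm(prompt: str) -> str:
        # Hypothetical placeholder; swap in a real completion call.
        raise NotImplementedError

    def answer_with_abstention(question: str, threshold: float = 0.7) -> str:
        # Draft an answer, then ask the model to judge its own confidence.
        draft = call_llm("Answer concisely: " + question)
        rating = call_llm(
            "On a scale from 0 to 1, how confident are you that this answer to '"
            + question + "' is factually correct? Reply with only a number.\n\n" + draft
        )
        try:
            confidence = float(rating.strip())
        except ValueError:
            confidence = 0.0  # unparseable self-rating: treat as no confidence
        if confidence >= threshold:
            return draft
        return "I don't know enough to answer that reliably."

A self-rating like this shares the model's blind spots, so it's a mitigation rather than a fix, but it shows the general shape: treat confidence as a separate signal instead of trusting a draft because it reads plausibly.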