
416 points | floverfelt
sebnukem2 ◴[] No.45056066[source]
> hallucinations aren’t a bug of LLMs, they are a feature. Indeed they are the feature. All an LLM does is produce hallucinations, it’s just that we find some of them useful.

Nice.

replies(7): >>45056284 #>>45056352 #>>45057115 #>>45057234 #>>45057503 #>>45057942 #>>45061686 #
ninetyninenine ◴[] No.45056352[source]
Nah, I don't agree with this characterization. The problem is that the majority of those hallucinations are true. The claim would make more sense if the majority of the responses were, in fact, false, but that is not the case.
replies(2): >>45056515 #>>45056567 #
xmprt ◴[] No.45056567[source]
I think you're both correct but have different definitions of hallucination. You're judging something as a hallucination based on the veracity of the output, whereas Fowler is judging it based on the method by which the output is produced. By that standard, everything is a hallucination, because the user cannot differentiate between when the LLM is telling the truth and when it isn't.

This is different from human hallucinations, where the mind makes something up because something is wrong with the mind itself, rather than because of some underlying issue with the brain's architecture.
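
To make the "method" point concrete, here is a toy sketch (purely illustrative assumptions; not how any particular model is implemented): the generation step is the same sampling procedure whether the result happens to be true or not, so truth never enters into how the output is produced.

    import random

    # Toy illustration: the generator's only operation is sampling the next
    # token from a learned probability distribution. There is no separate
    # code path for "true" versus "false" statements; truth is judged by
    # the reader afterwards, not checked by the sampling step.
    # (Distribution and tokens below are made up for the example.)
    next_token_probs = {
        "Paris": 0.90,   # happens to be factually correct
        "Lyon": 0.07,    # plausible but wrong
        "Gotham": 0.03,  # clearly wrong
    }

    def sample_next_token(probs):
        tokens = list(probs)
        weights = [probs[t] for t in tokens]
        return random.choices(tokens, weights=weights, k=1)[0]

    prompt = "The capital of France is"
    print(prompt, sample_next_token(next_token_probs))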

replies(2): >>45056809 #>>45057305 #
ants_everywhere ◴[] No.45056809{3}[source]
An LLM hallucination is defined by the truth of its output:

> In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called confabulation,[1] or delusion)[2] is a response generated by AI that contains false or misleading information presented as fact.[3][4]

You say

> This is different from human hallucinations, where the mind makes something up because something is wrong with the mind itself, rather than because of some underlying issue with the brain's architecture.

For consistency you might as well say that everything the human mind does is hallucination. It's the same sort of claim, and that claim at least has the virtue of having been taken seriously by people like Descartes.

https://en.wikipedia.org/wiki/Hallucination_(artificial_inte...

replies(1): >>45056842 #
ninetyninenine ◴[] No.45056842{4}[source]
Even the colloquial term outside of AI is characterized by the veracity of the output.