
416 points floverfelt | 1 comment
sebnukem2 No.45056066
> hallucinations aren’t a bug of LLMs, they are a feature. Indeed they are the feature. All an LLM does is produce hallucinations, it’s just that we find some of them useful.

Nice.

keeda No.45057942
I've preferred to riff on the other quote:

"All (large language) model outputs are hallucinations, but some are useful."

An astonishingly large proportion of them, actually. Hence the AI boom.