
416 points by floverfelt | 2 comments
Scubabear68 No.45056993
"Hallucinations aren’t a bug of LLMs, they are a feature. Indeed they are the feature".

I used to avidly read all his stuff, and I remember that about 20 years ago he decided to rename Inversion of Control to Dependency Injection. In doing so, and in his accompanying blog post, he showed he didn't actually understand it at a deep level (hence the poor renaming).

This feels similar. I know what he's trying to say, but he's just wrong. He's trying to say the LLM is hallucinating everything, but what Fowler is missing is that "hallucination" in LLM terms refers to a very specific negative behavior.

replies(3): >>45057036 >>45057288 >>45057604
ares623 No.45057036
As far as an LLM is concerned, there is no difference between a "negative" hallucination and a positive one. It's all just tokens and embeddings to it.
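
To make that concrete, here is a minimal sketch of next-token sampling in Python (the vocabulary, logit values, and function name are invented for illustration, not taken from any real model): the softmax-and-sample step is identical whether the chosen token happens to be true or a hallucination.

    import numpy as np

    # Hypothetical logits for four candidate next tokens; the values
    # and vocabulary are made up for illustration.
    vocab = ["Paris", "Lyon", "Berlin", "Tokyo"]
    logits = np.array([3.1, 1.2, 0.4, -0.5])

    def sample_next_token(logits, temperature=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # Softmax turns raw scores into a probability distribution.
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        # The model only sees probabilities here, never truth values:
        # a factually wrong token is sampled exactly the same way as
        # a factually right one.
        return int(rng.choice(len(logits), p=probs))

    print(vocab[sample_next_token(logits)])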

Positive hallucinations (ones that happen to be right) are more likely to happen nowadays, thanks to all the effort going into these systems.

replies(1): >>45057656
1. Scubabear68 No.45057656
This basically ruins the term “hallucination” and makes it meaningless, when the term actually describes a real phenomenon.
replies(1): >>45057878
2. ares623 No.45057878
That's the point. It is meaningless. When it was first coined, the term already had detractors who argued it was an incorrect description of the phenomenon. But it stuck.