
416 points by floverfelt | 1 comment
Scubabear68 No.45056993
"Hallucinations aren’t a bug of LLMs, they are a feature. Indeed they are the feature".

I used to avidly read all his stuff, and I remember that about 20 years ago he decided to rename Inversion of Control to Dependency Injection. In doing so, and in the accompanying blog post, he showed he didn't actually understand it at a deep level (hence the poor renaming).
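For anyone unfamiliar with the distinction being argued about, here's a minimal Python sketch, assuming the textbook definitions (the class names are made up for illustration): Inversion of Control is the broader principle that a component doesn't construct or look up its own collaborators, and Dependency Injection is one specific mechanism for achieving it.

    # Hypothetical illustration: DI as one concrete form of IoC.
    class SmtpMailer:
        def send(self, to, body):
            print(f"SMTP -> {to}: {body}")

    class Notifier:
        # Without IoC, Notifier would build its own mailer internally:
        #     self.mailer = SmtpMailer()
        # With constructor injection, the dependency is handed in from
        # outside, so tests can pass in a fake mailer instead.
        def __init__(self, mailer):
            self.mailer = mailer

        def notify(self, user):
            self.mailer.send(user, "hello")

    # The wiring code (the "injector") controls construction:
    notifier = Notifier(SmtpMailer())
    notifier.notify("alice@example.com")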

This feels similar. I know what he's trying to say, but he's just wrong. He's trying to say the LLM is hallucinating everything, but what Fowler is missing is that "hallucination" in LLM terms refers to a very specific negative behavior.

replies(3): >>45057036 >>45057288 >>45057604
1. anthem2025 No.45057288
No, it’s just an attempt to pretend wrong outputs are some special case when really they aren’t. The model isn’t imagining something that doesn’t exist; it’s running the same process it runs for everything else, and that process just didn’t work.
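To make that concrete, here's a toy Python sketch (not any real model; the prompt and probabilities are invented) of the point: generation is one sampling process, and "hallucination" is a label applied to the output afterwards, not a separate code path.

    import random

    # Hypothetical next-token distribution after "The capital of France is":
    candidates = {"Paris": 0.6, "Lyon": 0.25, "Berlin": 0.15}

    def sample_next_token(dist):
        # One draw from the model's probability distribution -- the only
        # generation mechanism there is, for right and wrong answers alike.
        tokens, weights = zip(*dist.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    for _ in range(5):
        token = sample_next_token(candidates)
        # Whether this draw counts as a "hallucination" depends on facts
        # outside the sampler; nothing in the process itself changed.
        print(token)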

If you disagree, then I would ask: what exactly is the “specific behaviour” you’re talking about?