
416 points floverfelt | 3 comments
oo0shiny ◴[] No.45057794[source]
> My former colleague Rebecca Parsons, has been saying for a long time that hallucinations aren’t a bug of LLMs, they are a feature. Indeed they are the feature. All an LLM does is produce hallucinations, it’s just that we find some of them useful.

What a great way of framing it. I've been trying to explain this to people, but this is a succinct version of what I was stumbling to convey.

replies(5): >>45060348 #>>45060455 #>>45061299 #>>45061334 #>>45061655 #
1. aitchnyu ◴[] No.45061334[source]
"All models are wrong, but some are useful" - an adage from 1976, with earlier forms dating to 1933 and before.
replies(2): >>45061363 #>>45061522 #
2. pjmorris ◴[] No.45061363[source]
Generally attributed to George Box
3. lagrange77 ◴[] No.45061522[source]
Right, all models are inherently wrong. It's up to the user to know about their limits and uncertainty.

But I think this 'being wrong' is kind of confusing when talking about LLMs (in contrast to systems/scientific modelling). In what they model (language), current LLMs are really good and accurate, apart from, say, the occasional Chinese character in the middle of a sentence.

But what we mean by LLMs 'being wrong' is, most of the time, being factually wrong when answering a question that happens to be expressed as language. That's a layer on top of what the model is designed to model.

EDITS:

So saying 'the model is wrong' when it's only factually wrong, i.e. wrong at a layer above the language level, isn't quite fair.

I guess this is essentially the same thought as 'all they do is hallucinate'.
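To make that point concrete, here is a minimal illustrative sketch in Python (the toy bigram table and the `sample_next`/`generate` names are invented for illustration, not taken from any real model): the generation loop only ever asks which token is plausible next, and factual truth never enters the procedure, so 'true' and 'false' completions are produced by exactly the same mechanism.

```python
# Illustrative sketch only: a toy next-token sampler.
# The point: sampling asks "what token is plausible next?" --
# there is no separate notion of factual truth anywhere in the loop.
import random

# Hypothetical toy "model": bigram probabilities over a tiny vocabulary.
TOY_MODEL = {
    "the":     {"capital": 0.5, "moon": 0.5},
    "capital": {"of": 1.0},
    "of":      {"france": 0.6, "mars": 0.4},  # "mars" is fluent but not factual
    "france":  {"is": 1.0},
    "mars":    {"is": 1.0},
    "is":      {"paris": 0.7, "london": 0.3},
}

def sample_next(context_token: str) -> str:
    """Sample the next token from the model's conditional distribution."""
    dist = TOY_MODEL.get(context_token, {"<eos>": 1.0})
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs, k=1)[0]

def generate(start: str, max_len: int = 8) -> str:
    """Greedily extend a sequence by repeated sampling until <eos> or max_len."""
    tokens = [start]
    while len(tokens) < max_len:
        nxt = sample_next(tokens[-1])
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

if __name__ == "__main__":
    # Every output comes from the same sampling mechanism; some happen to be true.
    for _ in range(3):
        print(generate("the"))
```

In this sketch a run might print "the capital of france is paris" or "the capital of mars is london"; both are equally valid draws from the model's distribution over language, which is the sense in which every output is a 'hallucination' and only some are useful.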