
416 points by floverfelt | 1 comment
oo0shiny ◴[] No.45057794[source]
> My former colleague Rebecca Parsons has been saying for a long time that hallucinations aren't a bug of LLMs, they are a feature. Indeed they are the feature. All an LLM does is produce hallucinations; it's just that we find some of them useful.

What a great way of framing it. I've been trying to explain this to people, but this is a succinct version of what I was stumbling to convey.
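
To make the framing concrete, here's a minimal sketch (toy, invented probabilities and token names, not any real model's API) of the point: every completion comes out of the same sampling step, so there's no mechanical difference between an "answer" and a "hallucination".

    import random

    # Toy next-token distribution for the prompt "The capital of France is ...".
    # The probabilities are invented for illustration.
    next_token_probs = {
        "Paris": 0.70,    # happens to be factually useful
        "Lyon": 0.20,     # plausible but wrong
        "banana": 0.10,   # obvious confabulation
    }

    def sample_next_token(probs: dict[str, float]) -> str:
        """Sample one token; nothing here checks whether the output is true."""
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    # All three outcomes are produced by the identical mechanism; we just
    # call the accurate ones "answers" and the rest "hallucinations".
    print(sample_next_token(next_token_probs))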

aitchnyu ◴[] No.45061334[source]
"All models are wrong, but some are useful" - an adage usually dated to 1976, with antecedents from 1933 and earlier.
pjmorris ◴[] No.45061363[source]
Generally attributed to George Box