oo0shiny:
> My former colleague, Rebecca Parsons, has been saying for a long time that hallucinations aren't a bug of LLMs, they are a feature. Indeed they are the feature. All an LLM does is produce hallucinations, it's just that we find some of them useful.

What a great way of framing it. I've been trying to explain this to people, but this is a succinct version of what I was stumbling to convey.

jstrieb:
I have been explaining this to friends and family by comparing LLMs to actors. They deliver an in-character performance, and are only factual when being factual happens to make the performance better.

https://jstrieb.github.io/posts/llm-thespians/

red75prime:
The analogy goes down the drain when the criterion for a good performance is being objectively right, as with Reinforcement Learning from Verifiable Rewards (RLVR).
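
For readers unfamiliar with RLVR: the reward signal comes from an automatic verifier (a unit test passing, a math answer matching the ground truth) rather than from human preference ratings. Below is a minimal sketch of that reward function; the names and the exact-match check are my own illustration under that assumption, not any particular paper's implementation.

    # Minimal sketch (illustrative only) of the "verifiable reward" idea:
    # the reward comes from a programmatic check, not from human ratings.

    def verifiable_reward(model_answer: str, reference_answer: str) -> float:
        """Return 1.0 only when the answer matches a mechanically
        checkable ground truth (e.g., a math result)."""
        return 1.0 if model_answer.strip() == reference_answer.strip() else 0.0

    def score_batch(completions: list[str], reference: str) -> list[float]:
        # Hypothetical fragment of a training loop: score sampled
        # completions, then reinforce the ones that were objectively right.
        return [verifiable_reward(c, reference) for c in completions]

    if __name__ == "__main__":
        print(score_batch(["42", "41", " 42"], "42"))  # [1.0, 0.0, 1.0]

In practice the verifier is domain-specific (test suites for code, symbolic checkers for math), but the point stands: the output is scored on being right, not on sounding right.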
jimbokun:
But being "objectively right" is not the goal of an actor.

Which is why it's a good metaphor for the behavior of LLMs.