416 points | floverfelt | 3 comments
oo0shiny ◴[] No.45057794[source]
> My former colleague Rebecca Parsons has been saying for a long time that hallucinations aren’t a bug of LLMs, they are a feature. Indeed they are the feature. All an LLM does is produce hallucinations, it’s just that we find some of them useful.

What a great way of framing it. I've been trying to explain this to people, but this is a succinct version of what I was stumbling to convey.

replies(5): >>45060348 #>>45060455 #>>45061299 #>>45061334 #>>45061655 #
1. tugberkk ◴[] No.45061299[source]
Yes, I can't remember who said it, but LLMs always hallucinate; it's just that they're 90-something percent right.
replies(2): >>45061320 #>>45061838 #
2. OtomotO ◴[] No.45061320[source]
Which totally depends on your domain and subdomain.

E.g. Programming in JS or Python: good enough

Programming in Rust: I end up scrapping over 50% of the code because it will

a) not compile at all (I can see this happening while the "AI" is still typing; a typical case is sketched below)

b) not meet the requirements at all
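
To make (a) concrete, here is a minimal, hypothetical sketch (not from any particular session) of the kind of thing I mean: an "append while iterating" pattern that an LLM will happily emit but that rustc rejects with E0502. The broken line is left commented out, with a compiling alternative below it:

    // Hypothetical illustration of a common hallucinated Rust pattern.
    fn main() {
        let mut names: Vec<String> = vec!["alpha".into(), "beta".into()];

        // Hallucinated version (rejected by rustc with E0502): the loop
        // borrows `names` immutably while `push` needs a mutable borrow.
        // for n in &names { names.push(n.clone()); }

        // Compiling alternative: collect the additions first, then extend.
        let extra: Vec<String> = names.iter().cloned().collect();
        names.extend(extra);

        println!("{names:?}");
    }

rustc flags the broken form immediately, which is why it's visible while the "AI" is still typing.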

3. ljm ◴[] No.45061838[source]
If I were to drop acid and hallucinate an alien invasion, and then a xenomorph suddenly ran loose around the city while I was tripping balls, would being right in that one instance mean the rest of my reality is also a hallucination?

Because the point that seems to be getting made, multiple times, is that a perceptual error isn't a key component of hallucinating; instead, the whole thing is just a convincing illusion that could theoretically apply to all perception, not just the psychoactively augmented kind.