
277 points by simianwords | 2 comments
robotcapital ◴[] No.45154570[source]
It’s interesting that most of the comments here read like projections of folk-psych intuitions. LLMs hallucinate because they “think” wrong, or lack self-awareness, or should just refuse. But none of that reflects how these systems actually work. This is a paper from a team working at the state of the art, trying to explain one of the biggest open challenges in LLMs, and instead of engaging with the mechanisms and evidence, we’re rehashing gut-level takes about what they must be doing. Fascinating.
replies(4): >>45154689 #>>45155695 #>>45155909 #>>45155983 #
zahlman ◴[] No.45154689[source]
Calling it a "hallucination" is anthropomorphizing too much in the first place, so....
replies(2): >>45154752 #>>45155168 #
1. robotcapital ◴[] No.45154752[source]
Right, that’s kind of my point. We call it “hallucination” because we don’t understand it, but need a shorthand to convey the concept. Here’s a paper trying to demystify it so maybe we don’t need to make up anthropomorphized theories.
replies(1): >>45169072 #
2. player1234 ◴[] No.45169072[source]
We do no such thing; they call it "hallucination" to deceive.

Altman simping all over.