
277 points | simianwords | 1 comment
robotcapital ◴[] No.45154570[source]
It’s interesting that most of the comments here read like projections of folk-psych intuitions. LLMs hallucinate because they “think” wrong, or lack self-awareness, or should just refuse. But none of that reflects how these systems actually work. This is a paper from a team working at the state of the art, trying to explain one of the biggest open challenges in LLMs, and instead of engaging with the mechanisms and evidence, we’re rehashing gut-level takes about what they must be doing. Fascinating.
replies(4): >>45154689 #>>45155695 #>>45155909 #>>45155983 #
1. KajMagnus ◴[] No.45155909[source]
Yes, many _humans_ here hallucinate, sort of.

They apparently didn't read the article, didn't understand it, or simply disregarded it. (Why, why, why?)

And they fail to realize that they don't know what they're talking about, yet keep talking anyway. Much like an overconfident AI.

In a discussion about hallucinating AIs, the humans start hallucinating.