
277 points | simianwords | 2 comments
robotcapital ◴[] No.45154570[source]
It’s interesting that most of the comments here read like projections of folk-psych intuitions: LLMs hallucinate because they “think” wrong, or because they lack self-awareness, or because they should simply refuse to answer. None of that reflects how these systems actually work. This is a paper from a team working at the state of the art, trying to explain one of the biggest open challenges in LLMs, and instead of engaging with the mechanisms and evidence, we’re rehashing gut-level takes about what the models must be doing. Fascinating.
replies(4): >>45154689 #>>45155695 #>>45155909 #>>45155983 #
1. renewiltord ◴[] No.45155695[source]
It's always the most low-brow takes as well. But then, the majority of Hacker News commenters "hallucinate" most of their comments in the first place: they simply regurgitate the top answers based on a broad bucketing of the subject matter.

Facebook? "Steal your data"

Google? "Kill your favourite feature"

Apple? "App Store is enemy of the people"

OpenAI? "More like ClosedAI amirite"

replies(1): >>45169092 #
2. player1234 ◴[] No.45169092[source]
About the same way you regurgitate Sammy's cum in your mouth.