
3337 points keepamovin | 1 comment
jll29 ◴[] No.46216933[source]
AI professor here. I know this page is a joke, but in the interest of accuracy, a terminological comment: we don't call it a "hallucination" when a model complies with what a prompt asked for and produces a prediction exactly as requested.

Rather, "hallucinations" are spurious replacements of factual knowledge with fictional material, caused by the use of a statistical process (the pseudo-random number generator used with the "temperature" parameter of neural transformers): token prediction without meaning representation.

[typo fixed]
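
To make the sampling point concrete, here is a minimal sketch of temperature-scaled next-token sampling. It is not any particular model's implementation; the vocabulary and logit values below are invented purely for illustration.

    # Minimal sketch of temperature-scaled sampling over next-token logits.
    # A real model produces logits over tens of thousands of tokens; these
    # four entries are hypothetical.
    import math
    import random

    def sample_next_token(logits, temperature=1.0, rng=random):
        """Pick one token index from softmax(logits / temperature)."""
        if temperature <= 0:
            # Greedy decoding: no randomness, always take the top logit.
            return max(range(len(logits)), key=lambda i: logits[i])
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        return rng.choices(range(len(logits)), weights=probs, k=1)[0]

    vocab = ["Paris", "Lyon", "Berlin", "banana"]
    logits = [4.0, 2.0, 1.5, 0.1]  # hypothetical next-token scores

    for t in (0.2, 1.0, 2.0):
        picks = [vocab[sample_next_token(logits, t)] for _ in range(1000)]
        print(t, {w: picks.count(w) for w in vocab})

Low temperature concentrates probability on the top-scoring token; higher temperature flattens the distribution, so lower-scoring tokens get sampled more often. That sampling step is the statistical process the comment above points at.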

replies(12): >>46217033 #>>46217061 #>>46217166 #>>46217410 #>>46217456 #>>46217758 #>>46218070 #>>46218282 #>>46218393 #>>46218588 #>>46219018 #>>46219935 #
hbn ◴[] No.46218070[source]
"Hallucination" has always seemed like a misnomer to me anyway considering LLMs don't know anything. They just impressively get things right enough to be useful assuming you audit the output.

If anything, I think all of their output should be called a hallucination.

replies(3): >>46218173 #>>46218539 #>>46219634 #
Workaccount2 ◴[] No.46218173[source]
We don't know if anything knows anything because we don't know what knowing is.
replies(2): >>46218427 #>>46218698 #
stingraycharles ◴[] No.46218698[source]
This is just something that sounds profound but really isn’t.

Knowing is actually the easiest part to define and explain. Intelligence / understanding is much more difficult to define.

replies(1): >>46219186 #
shagie ◴[] No.46219186[source]
I took a semester-long 500-level class back in college on the theory of knowledge. It is not easy to define; the entire branch of epistemology in philosophy deals with that question.

... To that end, I'd love to be able to revisit my classes from back then (computer science, philosophy (two classes from a double major), and a smattering of linguistics) with the state of today's technology.