
3337 points by keepamovin | 1 comment
jll29:
AI professor here. I know this page is a joke, but in the interest of accuracy, a terminological comment: we don't call it a "hallucination" when a model complies with what the prompt asked for and produces a prediction exactly as requested.

Rater, "hallucinations" are spurious replacements of factual knowledge with fictional material caused by the use of statistical process (the pseudo random number generator used with the "temperature" parameter of neural transformers): token prediction without meaning representation.

[typo fixed]
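For readers who want to see the mechanism the comment is pointing at, here is a minimal sketch of temperature-scaled next-token sampling. The vocabulary and logit values are made up for illustration; the point is only that the final draw is a pseudo-random choice over tokens, not a check against any meaning representation.

    import numpy as np

    rng = np.random.default_rng(0)  # the pseudo-random number generator in question

    # toy next-token logits over a tiny made-up vocabulary
    vocab = ["2026", "2035", "dragons"]
    logits = np.array([2.0, 1.0, -1.0])

    def sample_next_token(logits, temperature):
        # temperature rescales the logits before softmax;
        # higher temperature flattens the distribution
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        # the draw itself is just a pseudo-random choice,
        # with no reference to factual knowledge
        return vocab[rng.choice(len(vocab), p=probs)]

    print(sample_next_token(logits, temperature=0.2))  # almost always "2026"
    print(sample_next_token(logits, temperature=1.5))  # "dragons" becomes plausible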

ayewo:
Terminology-wise, does this read like a better title instead?:

Show HN: Gemini Pro 3 generates the HN front page 10 years from now

locknitpicker:
> Terminology-wise, does this read like a better title instead?:

"Generates" does not convey any information about the nature of the process used to create the output. In this context, "extrapolates", "predicts", or "explores" sounds more suitable.

But nitpicking over these words is pointless and amounts to going off on a tangent. The term "hallucination" refers to the specific mechanism used to generate this type of output, just like prompting a model to transcode a document and thereby generating an output that doesn't match any established format.