
504 points | puttycat
jameshart ◴[] No.46182056[source]
Is the baseline assumption of this work that an erroneous citation is LLM hallucinated?

Did they run the checker across a body of papers before LLMs were available and verify that there were no citations in peer reviewed papers that got authors or titles wrong?

replies(5): >>46182229 #>>46182238 #>>46182245 #>>46182375 #>>46186305 #
llm_nerd ◴[] No.46182238[source]
People commonly dismiss LLMs as unusable because they make mistakes. So do people. Books have errors. Papers have errors. People have flawed knowledge, often degraded through a conceptual game of telephone.

Exactly as you said: run precisely this checker over pre-LLM works. With utter certainty, there will be an enormous number of errors.

People keep imperfect notes. People are lazy. People sometimes even fabricate. None of this needed LLMs to happen.
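The checker being proposed here could be sketched in miniature. A real tool would query a bibliographic database such as Crossref and compare cited titles against the records it returns; the sketch below substitutes a hypothetical local list of known titles and uses fuzzy matching so that mere formatting differences (capitalization, trailing punctuation) don't get flagged as errors, while substantively wrong titles do.

```python
import difflib
import re

def normalize(title: str) -> str:
    """Lowercase and strip punctuation so formatting quirks don't count as mismatches."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def check_citation(cited_title: str, index: list[str], threshold: float = 0.9) -> bool:
    """Return True if the cited title closely matches any known title in the index."""
    target = normalize(cited_title)
    return any(
        difflib.SequenceMatcher(None, target, normalize(known)).ratio() >= threshold
        for known in index
    )

# Hypothetical stand-in for a bibliographic database.
index = [
    "Attention Is All You Need",
    "Deep Residual Learning for Image Recognition",
]

print(check_citation("Attention is all you need.", index))  # formatting-only difference
print(check_citation("Attention Is All You Want", index))   # garbled or fabricated title
```

The threshold is the interesting knob: set it too low and fabricated titles that paraphrase real ones slip through; too high and honest transcription errors get flagged as fabrications, which is exactly the baseline ambiguity the parent comment is asking about.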

replies(4): >>46182279 #>>46182296 #>>46182511 #>>46184858 #
pmontra ◴[] No.46182511[source]
Fabricated citations are not errors.

A pre-LLM paper with fabricated citations would demonstrate a will to cheat by the author.

A post-LLM paper with fabricated citations: same thing, and if the authors attempt to defend themselves with something like "we trusted the AI", they are sloppy, probably cheaters, and not very good at it.

replies(2): >>46182732 #>>46183124 #
mapmeld ◴[] No.46183124[source]
Further, if I use AI-written citations to back some claim or fact, what are the actual claims or facts based on? This started happening in law: someone writes the text, then wishes there were a source that was relevant and actually supported their claim. But if someone puts in the labor to check your real/extant sources, there's nothing backing them (e.g. the MAHA report).