
504 points by puttycat | 1 comment
jameshart (No.46182056):
Is the baseline assumption of this work that an erroneous citation is LLM hallucinated?

Did they run the checker across a body of papers before LLMs were available and verify that there were no citations in peer reviewed papers that got authors or titles wrong?
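(For concreteness, the kind of check I have in mind is something like the sketch below: a hypothetical lookup against the public Crossref API, not whatever tool the authors actually ran. The similarity threshold and matching rule are made up.)

  import requests
  from difflib import SequenceMatcher

  def crossref_lookup(citation_text):
      """Return the best-matching Crossref record for a free-text citation."""
      resp = requests.get(
          "https://api.crossref.org/works",
          params={"query.bibliographic": citation_text, "rows": 1},
          timeout=10,
      )
      resp.raise_for_status()
      items = resp.json()["message"]["items"]
      return items[0] if items else None

  def looks_fabricated(cited_title, cited_authors):
      """Flag a citation whose title or authors don't match the indexed record."""
      record = crossref_lookup(cited_title + " " + " ".join(cited_authors))
      if record is None:
          return True
      found_title = (record.get("title") or [""])[0]
      title_ok = SequenceMatcher(None, cited_title.lower(), found_title.lower()).ratio() >= 0.85
      found_surnames = {a.get("family", "").lower() for a in record.get("author", [])}
      cited_surnames = {name.split()[-1].lower() for name in cited_authors if name.strip()}
      # Arbitrary rule: require both a close title match and at least one shared surname.
      return not (title_ok and cited_surnames & found_surnames)

Running something like that over a pre-LLM corpus would give the false-positive baseline I'm asking about: how often do honest, peer-reviewed papers get flagged for a mangled author list or title?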

currymj (No.46186305):
The papers themselves are publicly available online too. Most of the ones I spot-checked give the extremely strong impression of AI generation.

It's not just some hallucinated citations, and not just the writing: in many cases the actual purported research "ideas" seem to be plausible nonsense.

To get a feel for it, you can take some of the topics they write about and ask your favorite LLM to generate a paper. Maybe even throw "Deep Research" mode at it. Perhaps tell it to put it in ICLR LaTeX format. It will look a lot like these.
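A rough sketch of that experiment, assuming the OpenAI Python client (the topic and model name are just placeholders):

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  topic = "contrastive self-supervised learning for tabular data"  # pick any topic these papers cover
  prompt = (
      f"Write a short research paper on {topic}. "
      "Follow the ICLR LaTeX template conventions: abstract, introduction, "
      "method, experiments, and a bibliography with \\cite keys."
  )

  resp = client.chat.completions.create(
      model="gpt-4o",  # placeholder; any recent model
      messages=[{"role": "user", "content": prompt}],
  )
  print(resp.choices[0].message.content)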