Isamu:
Someone commented here that hallucination is simply what LLMs do: their designed mode of operation is to select statistically relevant patterns learned from the training set and recombine them into an output. The result is something that statistically resembles a real citation.

Creating a real citation is entirely doable by a machine, though: it's a matter of selecting the relevant text, looking up the title, authors, pages, etc., and putting that in canonical form (a sketch of that lookup step follows below). The point is that LLMs are not currently doing the work we ask for, but something similar in form that may pass as good enough.
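To make that concrete, here is a minimal sketch of the "look it up and put it in canonical form" step, assuming Python with the requests library and the public Crossref REST API (https://api.crossref.org/works). The lookup_citation helper and the chosen output format are illustrative assumptions, not how any particular LLM tool actually does it:

    # Sketch: resolve a free-text reference to a real, verifiable citation.
    # Assumes Crossref's documented JSON fields (title, author,
    # container-title, issued, page, DOI); error handling is minimal.
    import requests

    def lookup_citation(query: str) -> str:
        """Return the best-matching work for `query` as a plain citation string."""
        resp = requests.get(
            "https://api.crossref.org/works",
            params={"query.bibliographic": query, "rows": 1},
            timeout=10,
        )
        resp.raise_for_status()
        items = resp.json()["message"]["items"]
        if not items:
            raise ValueError(f"no match found for {query!r}")

        work = items[0]
        authors = ", ".join(
            f"{a.get('family', '')}, {a.get('given', '')}".strip(", ")
            for a in work.get("author", [])
        )
        title = work.get("title", ["(untitled)"])[0]
        venue = (work.get("container-title") or [""])[0]
        year = work.get("issued", {}).get("date-parts", [[None]])[0][0]
        pages = work.get("page", "")
        doi = work.get("DOI", "")

        # Assemble a simple canonical form; a real pipeline would pick a
        # citation style (APA, BibTeX, ...) and verify the match rather
        # than trusting the top search hit.
        parts = [authors, f"({year})" if year else "", title, venue, pages,
                 f"doi:{doi}" if doi else ""]
        return ". ".join(p for p in parts if p)

    if __name__ == "__main__":
        print(lookup_citation("Attention Is All You Need Vaswani"))

The difference from the LLM's behavior is that every field here comes from a record that actually exists, so the output can be wrong only by matching the wrong record, never by inventing one.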
