amelius No.45149170
They hallucinate because it's an ill-defined problem with two conflicting use cases:

1. If I tell it the first two lines of a story, I want the LLM to complete the story. This requires hallucination, because it has to make things up; the story has to be original.

2. If I ask it a question, I want it to reply with facts. It should not make up stuff.

LMs were originally designed for (1) because researchers thought (2) was out of reach. But it turned out that, without any fundamental changes, LMs could also do a bit of (2), and since that discovery things have improved, though not to the point where hallucination has disappeared or is under control.
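
A toy sketch of that tension (my own illustration, assuming nothing beyond ordinary next-token sampling): both use cases go through the same sampler, and the usual knob, temperature, only trades determinism for diversity; it carries no notion of which continuation is factually true.

    # Toy next-token sampler; the distribution and tokens are made up for illustration.
    import random

    def sample(probs, temperature):
        """Pick a token from a {token: probability} dict at a given temperature."""
        if temperature == 0:
            return max(probs, key=probs.get)               # greedy: most likely token
        weights = [p ** (1.0 / temperature) for p in probs.values()]
        return random.choices(list(probs), weights=weights, k=1)[0]

    # Imagined distribution after the prompt "The capital of Australia is"
    probs = {"Canberra": 0.55, "Sydney": 0.30, "Melbourne": 0.15}

    print(sample(probs, 0))                                # fact mode: always "Canberra"
    print([sample(probs, 1.5) for _ in range(5)])          # story mode: varied, sometimes wrong

The same mechanism that lets it invent a plausible next sentence for a story is the one that invents a plausible-but-wrong fact.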

furyofantares No.45152405
Wanting it to pick between those modes based on what you asked for is not remotely ill-defined.

But even if we restrict ourselves to factual queries, the article discusses why training a certain way would still produce hallucinations, and how to change the training method to reduce them.
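
As I read it, that part of the argument boils down to expected value under the grading scheme. A minimal sketch (the scoring numbers here are my own, not the article's):

    # Expected score of guessing vs. abstaining for a model that puts
    # probability p on its best candidate answer.
    def expected_score(p, wrong_penalty):
        guess = p * 1.0 + (1 - p) * wrong_penalty   # correct -> 1, wrong -> penalty
        abstain = 0.0                               # "I don't know" scores 0
        return guess, abstain

    # Accuracy-only grading: a wrong guess costs nothing, so guessing always beats
    # abstaining, even at 30% confidence; that preference is a trained-in hallucination.
    print(expected_score(0.3, wrong_penalty=0.0))    # (0.3, 0.0)

    # Penalize confident errors and abstaining wins whenever p < 0.5.
    print(expected_score(0.3, wrong_penalty=-1.0))   # (-0.4, 0.0)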

Like many of the other responses here, your dismissal doesn't really address any of the content of the article, just the title.