277 points simianwords | 11 comments
amelius ◴[] No.45149170[source]
They hallucinate because it's an ill-defined problem with two conflicting usecases:

1. If I tell it the first two lines of a story, I want the LLM to complete the story. This requires hallucination, because it has to make up things. The story has to be original.

2. If I ask it a question, I want it to reply with facts. It should not make up stuff.

Language models were originally designed for (1) because researchers thought (2) was out of reach. But it turned out that, without any fundamental changes, they could do a bit of (2), and since that discovery things have improved, though not to the point where hallucination has disappeared or is under control.

replies(10): >>45149354 #>>45149390 #>>45149708 #>>45149889 #>>45149897 #>>45152136 #>>45152227 #>>45152405 #>>45152996 #>>45156457 #
wavemode ◴[] No.45149354[source]
Indeed - as Rebecca Parsons puts it, all an LLM knows how to do is hallucinate. Users just tend to find some of these hallucinations useful, and some not.
replies(5): >>45149571 #>>45149593 #>>45149888 #>>45149966 #>>45152431 #
1. throwawaymaths ◴[] No.45149571[source]
that's wrong. there is probably a categorical difference between making something up via some sort of inferential induction from the kv-cache context, under the pressure of producing a token -- any token -- and actually looking something up and producing a token.

so if you ask, "what is the capital of colorado" and it answers "denver", calling it a Hallucination is nihilistic nonsense that paves over actually stopping to try to understand the important dynamics happening in the llm matrices
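a toy sketch of that "producing a token -- any token" dynamic, with a made-up vocabulary and made-up logits (not any real model): the last step of decoding is a softmax over the vocabulary followed by sampling, so something always gets emitted, even when the distribution is nearly flat

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["Denver", "Boulder", "Paris", "Helsinki", "banana"]

    def sample_next(logits, temperature=1.0):
        # softmax over the vocabulary, then sample -- a token is always returned
        z = np.asarray(logits, dtype=float) / temperature
        p = np.exp(z - z.max())
        p /= p.sum()
        return rng.choice(vocab, p=p), p.round(3)

    # peaked logits: the model has effectively "looked something up"
    print(sample_next([9.0, 2.0, 0.5, 0.1, -3.0]))    # almost always Denver

    # nearly flat logits: no strong evidence for anything, yet sampling still
    # emits a token that reads just as fluently as the confident answer
    print(sample_next([0.30, 0.28, 0.25, 0.27, 0.10]))

in the second case nothing resembling retrieval is happening; the sampler is just doing its job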

replies(3): >>45149984 #>>45152027 #>>45152539 #
2. mannykannot ◴[] No.45149984[source]
There is a way to state Parsons' point that avoids this issue: hallucinations are just as much a consequence of the LLM working as designed as correct statements are.
replies(1): >>45151094 #
3. throwawaymaths ◴[] No.45151094[source]
fine. which part is the problem?
replies(2): >>45152170 #>>45164325 #
4. saghm ◴[] No.45152027[source]
> so if you ask, "what is the capital of colorado" and it answers "denver" calling it a Hallucination is nihilistic nonsense that paves over actually stopping to try and understand important dynamics happening in the llm matrices

On the other hand, calling it anything other than a hallucination misrepresents truth as something these models can actually differentiate in their outputs, i.e. whether those outputs accurately reflect reality, and recasts a fundamentally unsolved problem as a mere engineering tradeoff.

replies(1): >>45152907 #
5. johnnyanmac ◴[] No.45152170{3}[source]
The part where it can't recognize situations where there isn't enough data/training and admit that it doesn't know.

I'm a bit surprised no one talks about this factor. It's like talking to a giant narcissist who can Google really fast but doesn't understand what it reads. The ability to admit ignorance is a major factor in credibility, because none of us knows everything all at once.

replies(1): >>45153022 #
6. littlestymaar ◴[] No.45152539[source]
> that's wrong.

Why would anyone respond with so little nuance?

> a Hallucination

Oh, so your shift key isn't broken after all; then why aren't you using it in the rest of your sentences?

7. ComplexSystems ◴[] No.45152907[source]
It isn't a hallucination because that isn't how the term is defined. The term "hallucination" refers, very specifically, to "plausible but false statements generated by language models."

At the end of the day, the goal is to train models that can differentiate between true and false statements, at least to a much better degree than they can now. The linked article seems to have some very interesting suggestions about how to get them to do that.

replies(2): >>45153078 #>>45166613 #
8. throwawaymaths ◴[] No.45153022{4}[source]
yeah, sorry, i meant: which part of the architecture is "working as designed"?
9. throwawaymaths ◴[] No.45153078{3}[source]
your point is good and taken, but i would amend it slightly -- i don't think "absolute truth" is itself the goal, but rather "how aware is it that it doesn't know something". this negative space is frustratingly hard to capture in the llm architecture (though there are almost certainly signs -- if you had direct access to the logits array, for example)
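as a toy numeric sketch of that "negative space" signal, with made-up logits rather than anything taken from a real model: low max-probability / high entropy over the next-token distribution is a rough proxy for "it doesn't actually know", and could in principle gate an "i don't know" instead of a confident-sounding guess

    import numpy as np

    def softmax(logits):
        z = np.asarray(logits, dtype=float)
        e = np.exp(z - z.max())
        return e / e.sum()

    def confidence(logits):
        # max probability and entropy of the next-token distribution
        p = softmax(logits)
        entropy = -np.sum(p * np.log(p + 1e-12))
        return p.max(), entropy

    print(confidence([9.0, 2.0, 0.5, 0.1]))      # peaked: high max-prob, low entropy
    print(confidence([0.3, 0.28, 0.25, 0.2]))    # flat: low max-prob, high entropy

    # hypothetical abstention threshold, chosen arbitrarily for illustration
    max_p, _ = confidence([0.3, 0.28, 0.25, 0.2])
    if max_p < 0.6:
        print("i don't know")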
10. mannykannot ◴[] No.45164325{3}[source]
I suppose you are aware that, for many uses of LLMs, the propensity to hallucinate is a problem (especially when it is not properly taken into account by the people hoping to use them), but that leaves me puzzled about what you are asking here.
11. player1234 ◴[] No.45166613{3}[source]
Why use a word that you have to redefine the meaning of? The answer is to deceive.