
An LLM is a lossy encyclopedia

(simonwillison.net)
509 points by tosh | 4 comments

(the referenced HN thread starts at https://news.ycombinator.com/item?id=45060519)
thw_9a83c No.45100937
Yes, an LLM is a lossy encyclopedia with a human-language answering interface. This has some benefits, mostly in terms of convenience: you don't have to browse through pages of a real encyclopedia to get a quick answer. However, there is also a clear downside. Currently, an LLM can't judge whether your question is formulated incorrectly, or whether it opens up further questions that should be answered first; it always jumps straight to answering something. A real human would size up the questioner first and usually ask for more details before answering. I feel this is the predominant reason LLM answers feel so dumb at times: the model never asks for clarification.
replies(2): >>45101167, >>45102521
simonw No.45101167
I don't think that's universally true with the new models - I've seen Claude 4 and GPT-5 ask for clarification on questions with obvious gaps.

With GPT-5 I sometimes see it spot a question that needs clarifying in its thinking trace, pick the most likely interpretation, and then produce an answer that says "assuming you meant X ..." - I've even had it answer in two sections, one for each branch of a clear ambiguity.
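
As a minimal sketch of how you might elicit that two-branch behavior explicitly, using the OpenAI Python SDK (the "gpt-5" model name and the prompt wording here are my own assumptions, not something simonw describes):

    # Sketch: ask the model to flag ambiguity and answer each reading.
    # Assumes the openai Python SDK (>=1.0); model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-5",
        messages=[
            {
                "role": "system",
                "content": (
                    "If the question is ambiguous, say so, then answer "
                    "each plausible reading in its own section."
                ),
            },
            {"role": "user", "content": "How long does a python live?"},
        ],
    )
    print(response.choices[0].message.content)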

replies(2): >>45101180, >>45101700
koakuma-chan No.45101180
GPT-5 is seriously annoying. It asks not just one but multiple clarifying questions, while I just want my answer.
replies(1): >>45101832
kingstnap No.45101832
If you don't want to answer clarifying questions, then what use is the answer???

Put another way, if you don't care about details that change the answer, it directly implies you don't actually care about the answer.

Related silliness is how people force LLMs to give one word answers to underspecified comparisons. Something along the lines of "@Grok is China or US better, one word answer only."

At that point, just flip a coin. You obviously can't conclude anything useful with the response.

replies(1): >>45102249
koakuma-chan No.45102249
No, I don't think GPT-5's clarifying questions actually do what you think they do. The model seems to have been made to ask clarifying questions for the sake of asking clarifying questions. I'm sure GPT-4o would have given me the answer I wanted without any.
replies(1): >>45107752
kiitos No.45107752
Revisit your instructions.md and/or user preferences; that is very likely the root cause.
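
For example, a hypothetical line like the following in an instructions.md or a user-preferences field (not koakuma-chan's actual config, which isn't shown in the thread) would produce exactly this behavior:

    Before answering, always ask me clarifying questions,
    even when the request seems unambiguous.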
replies(1): >>45113784
koakuma-chan No.45113784
Wait, what? I use duck.ai; could it be that they put something into the system prompt...?
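
Quite possibly: chat front ends typically inject their own system prompt ahead of every user message. A rough sketch of the pattern, using the OpenAI Python SDK (the prompt text and model name are invented for illustration; nothing here reflects what duck.ai actually sends):

    # Sketch of how a chat front end can inject a hidden system prompt.
    # FRONTEND_SYSTEM_PROMPT and the model name are invented examples.
    from openai import OpenAI

    client = OpenAI()

    FRONTEND_SYSTEM_PROMPT = (
        "Before answering, ask the user clarifying questions whenever "
        "the request could be read in more than one way."
    )

    def ask(user_message: str) -> str:
        # The user never sees the injected system message,
        # but it shapes every answer they get back.
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": FRONTEND_SYSTEM_PROMPT},
                {"role": "user", "content": user_message},
            ],
        )
        return response.choices[0].message.content

    print(ask("What's the best way to back up a laptop?"))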