
An LLM is a lossy encyclopedia

(simonwillison.net)
509 points by tosh | 1 comment

(the referenced HN thread starts at https://news.ycombinator.com/item?id=45060519)
thw_9a83c No.45100937
Yes, an LLM is a lossy encyclopedia with a human-language answering interface. This has some benefits, mostly in terms of convenience: you don't have to browse or read through many pages of a real encyclopedia to get a quick answer. However, there is also a clear downside. Currently, an LLM is unable to judge whether your question is formulated incorrectly, or whether it opens up other questions that should be answered first. It always jumps straight to answering something. A real human would assess the questioner first and usually ask for more details before answering. I feel this is the predominant reason why LLM answers feel so dumb at times: it never asks for clarification.
replies(2): >>45101167 #>>45102521 #
1. coffeefirst No.45102521
This is also why the Kagi Assistant is still the best AI tool I've found. Its failure state is the same as a search result's: it either can't find anything, finds something irrelevant, or finds material that contradicts the premise of your question.

It seems to me the more you can pin it to another data set, the better.
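The "pin it to another data set" idea can be sketched in code. This is a hypothetical, minimal retrieval step (not Kagi's actual implementation): the assistant may only answer from a document it retrieved, so when nothing relevant matches, the result is an honest "nothing found" rather than a confident improvisation. The corpus, the keyword-overlap scoring, and the `min_overlap` threshold are all illustrative assumptions.

```python
from typing import Optional

def retrieve(query: str, corpus: dict, min_overlap: int = 2) -> Optional[str]:
    """Return the key of the best-matching document by keyword overlap,
    or None when no document clears the threshold (the honest failure state)."""
    query_words = set(query.lower().split())
    best_doc, best_score = None, 0
    for name, text in corpus.items():
        score = len(query_words & set(text.lower().split()))
        if score > best_score:
            best_doc, best_score = name, score
    return best_doc if best_score >= min_overlap else None

# Illustrative corpus (made up for this sketch).
corpus = {
    "dns": "DNS resolves domain names to IP addresses using recursive resolvers",
    "tls": "TLS encrypts traffic between a client and a server using certificates",
}

hit = retrieve("how does DNS map domain names to addresses", corpus)
# A grounded assistant answers only from the retrieved document; when `hit`
# is None, it reports that nothing relevant was found instead of guessing.
```

The design point mirrors the comment above: by routing every answer through retrieval, the system inherits search's failure modes (nothing found, irrelevant hit) instead of the LLM's failure mode of fluent fabrication.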