An LLM is a lossy encyclopedia

(simonwillison.net)
509 points by tosh | 3 comments

(the referenced HN thread starts at https://news.ycombinator.com/item?id=45060519)
1. mfalcon No.45102289
I think that the natural language understanding capability of current LLMs is undervalued.

Before LLMs, to understand what the user meant we had to train several NLP+ML models just to get something going, and in my experience those never got close to what LLMs do now.
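
As a rough illustration of that older approach, here's a minimal sketch of a pre-LLM intent classifier (the intents, training utterances, and scikit-learn pipeline are my own hypothetical example, not anything from the article):

    # Pre-LLM NLU sketch: one supervised model per narrow task.
    # This classifier only maps an utterance to a fixed intent label;
    # entity extraction, slot filling, etc. each needed their own model.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_utterances = [
        "what's the weather like tomorrow",
        "will it rain this weekend",
        "set an alarm for 7am",
        "wake me up at six thirty",
    ]
    train_intents = ["weather", "weather", "alarm", "alarm"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(train_utterances, train_intents)

    # Only works within the narrow distribution of the training data;
    # phrasings outside it fail easily, which is the gap I'm pointing at.
    print(clf.predict(["remind me to get up at 8"]))

An LLM collapses that whole pipeline into a single prompt, with no task-specific training data.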

I remember the first time I tried ChatGPT and I was surprised by how well it understood every input.

replies(1): >>45102331 #
2. Zigurd No.45102331
It's parsing. It's tokenizing. But it's a stretch to call it understanding. It creates a pattern that it can use to compose a response. Ensuring the response is factual is not fundamental to LLM algorithms.
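
For what it's worth, the tokenize-then-compose loop being described looks roughly like this (a minimal sketch using GPT-2 through the Hugging Face transformers library; the model choice and greedy decoding are just for illustration):

    # Tokenize, score the next token, append: nothing in this loop
    # checks whether the continuation is factual.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    ids = tokenizer("An LLM is a lossy", return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(ids).logits           # a score for every vocab token
    next_id = logits[0, -1].argmax().item()  # greedy: pick the most likely
    print(tokenizer.decode([next_id]))       # a plausible continuation

Repeating that step is all it takes to produce fluent text; factuality never enters the objective.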

In other words, it's not thinking. The fact that it can simulate a conversation between thinking humans without itself thinking is remarkable. That should tell us something about the human facility for language. But it's not understanding or thinking.

replies(1): >>45103963 #
3. mfalcon No.45103963
I know that "understanding" is a stretch, but I was referring to the "U" in NLU (Natural Language Understanding), which wasn't really understanding either.