
An LLM is a lossy encyclopedia

(simonwillison.net)
509 points by tosh | 1 comment

(the referenced HN thread starts at https://news.ycombinator.com/item?id=45060519)
GuB-42 [No.45101186]
There are a lot of parallels between AI and compression.

In fact, the best compression algorithms and LLMs share a core mechanism: both work by predicting what comes next. Compression algorithms add an extra step, entropy coding, which efficiently encodes the difference between the prediction and the actual data; the better the prediction, the better the compression ratio.
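
(For a concrete picture, here is a minimal sketch of prediction plus entropy coding in Python. Assumptions of mine: a toy order-0 frequency model stands in for the LLM, and exact fractions stand in for a production range coder, which would renormalize with integer arithmetic.)

    from collections import Counter
    from fractions import Fraction

    def cdf(model, alphabet):
        # Each symbol owns a slice of [0, 1) proportional to its
        # predicted probability; likelier symbols get wider slices.
        total = sum(model[s] for s in alphabet)
        lo, intervals = Fraction(0), {}
        for s in alphabet:
            hi = lo + Fraction(model[s], total)
            intervals[s] = (lo, hi)
            lo = hi
        return intervals

    def encode(text, intervals):
        # Narrow [lo, hi) by each symbol's slice; the final interval
        # uniquely identifies the whole message.
        lo, hi = Fraction(0), Fraction(1)
        for ch in text:
            s_lo, s_hi = intervals[ch]
            lo, hi = lo + (hi - lo) * s_lo, lo + (hi - lo) * s_hi
        return (lo + hi) / 2  # any point inside the final interval

    def decode(code, intervals, length):
        out = []
        for _ in range(length):
            for ch, (s_lo, s_hi) in intervals.items():
                if s_lo <= code < s_hi:
                    out.append(ch)
                    code = (code - s_lo) / (s_hi - s_lo)
                    break
        return "".join(out)

    text = "abracadabra"
    model = Counter(text)  # stand-in for a learned next-symbol predictor
    intervals = cdf(model, sorted(model))
    code = encode(text, intervals)
    assert decode(code, intervals, len(text)) == text  # lossless round trip

(With an adaptive or contextual predictor like an LLM, the intervals would be recomputed at every step from the model's conditional distribution; a better prediction gives the actual symbol a wider interval, so the final code needs fewer bits.)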

What makes an LLM "lossy" is that you don't have the "encode the difference" step.

And yes, it means you can turn an LLM into a (lossless) compression algorithm, and I think a really good one in terms of compression ratio on huge data sets. You can also turn a compression algorithm like gzip into a language model! A terrible one, but the output is better than a random stream of bytes.
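
(Again for illustration, a rough sketch of the gzip-as-language-model trick: rank candidate next characters by how few extra bytes they cost to compress along with the context. The scoring heuristic and the toy alphabet here are my own choices, and DEFLATE's byte granularity makes this a crude signal, with many candidates tying.)

    import zlib

    def next_char_scores(context, alphabet):
        # Fewer extra compressed bytes means the candidate fits the
        # patterns DEFLATE already found in the context, i.e. the
        # compressor implicitly "predicts" it.
        base = len(zlib.compress(context.encode()))
        return {c: len(zlib.compress((context + c).encode())) - base
                for c in alphabet}

    context = "the cat sat on the mat. the cat sat on the "
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    scores = next_char_scores(context, alphabet)
    print(sorted(scores, key=scores.get)[:5])  # cheapest candidates first

(Turning those byte costs into a probability distribution, e.g. weighting each candidate by two to the power of minus its extra bits, would give you something you could actually sample from. A very bad language model, but a language model.)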

replies(3): >>45101276, >>45102534, >>45103227
jparishy [No.45103227]
I suspect this ends up being pretty important for the next advancements in AI, specifically LLM-based AI. To me, the transformer architecture is a sort of compression algorithm that is being exploited for emergent behavior at the margins. But I think this is more like stream of consciousness than premeditated thought. Eventually, I think we'll figure out a way to "think" in latent space and have our existing AI models be just the mouthpiece.

In my experience as a human, the more you know about a subject, or even the more you have simply seen content about it, the easier it is to ramble on about it convincingly. It's like a mirroring skill, and it does not actually mean you understand what you're saying.

LLMs seem to do the same thing, I think. At scale this is widely useful, though; I am not discounting it. I just think it's an order of magnitude below what's possible, and all this talk of existing stream-of-consciousness-like LLMs creating AGI seems like a miss.