
DeepSeek OCR

(github.com)
990 points | by pierre
krackers No.45640720
The paper is more interesting than just another VLM for OCR; they start talking about compression and related ideas. E.g., there is this quote:

>Our work represents an initial exploration into the boundaries of vision-text compression, investigating how many vision tokens are required to decode text tokens. The preliminary results are encouraging: DeepSeek-OCR achieves near-lossless OCR compression at approximately 10× ratios, while 20× compression still retains 60% accuracy.

(I guess you could say a picture token is worth 10 textual tokens...)

Could someone explain to a noob what the information-theoretic intuition is here? Why does this work? Is it that text tokens are still too "granular"/repetitive and don't come close to ideal entropy coding? Or is switching to vision tokens escaping the limitation of working "one word-ish at a time", allowing you to get closer to the entropy limit (similar to the way arithmetic coding does compared to Huffman codes)?

And then they start talking about handling long context by literally(?) downscaling images, forming a correspondence between information loss in the textual domain and in the image domain.
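
A rough sketch of what that could mean in practice, assuming plain ViT-style patching with a 16-pixel patch (an assumption for illustration, not necessarily their actual encoder): vision-token count scales with pixel area, so downscaling an older page is a quadratic knob on its token cost.

    # Hypothetical illustration, not DeepSeek-OCR's real pipeline:
    # with ViT-style patching, tokens = (W // patch) * (H // patch),
    # so halving the resolution quarters the vision-token cost.
    def vision_tokens(width: int, height: int, patch: int = 16) -> int:
        return (width // patch) * (height // patch)

    for side in (1024, 512, 256):
        print(side, "px ->", vision_tokens(side, side), "tokens")
    # 1024 px -> 4096, 512 px -> 1024, 256 px -> 256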

replies(7): >>45640731 #>>45641225 #>>45642325 #>>45642598 #>>45643765 #>>45645167 #>>45651976 #
looobay No.45640731
LLMs are compute heavy, with attention cost scaling quadratically in the number of tokens. They are trying to compress text tokens into fewer vision tokens with their VLM.

Maybe they could render text as an image before tokenizing to reduce the compute cost.
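
A toy cost model to show where the savings would come from (assuming attention cost grows as n^2 * d and ignoring the vision encoder's own compute; both are simplifications):

    # Toy model: self-attention FLOPs ~ n^2 * d (ignores the vision
    # encoder and any hidden-size differences between the two paths).
    def attn_cost(n_tokens: int, d_model: int = 4096) -> int:
        return n_tokens ** 2 * d_model

    n_text = 10_000               # text tokens for a long document
    n_vision = n_text // 10       # the paper's ~10x compression ratio
    print(attn_cost(n_text) / attn_cost(n_vision))   # ~100x fewer attention FLOPs

This is just the token-count arithmetic, though; it doesn't by itself explain why a vision token can carry roughly ten text tokens' worth of content.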

replies(1): >>45640755 #
krackers No.45640755
But naively, wouldn't you expect the representation of a piece of text in terms of vision tokens to be roughly the same number of bits (or more) as its representation in text tokens? You're changing representation, sure, but that by itself doesn't give you any compute advantage unless there is some sparsity/compressibility you can exploit in the domain you transform to, right?

So I guess my question is: where is the juice being squeezed from? Why does the vision-token representation end up being more efficient than text tokens?

replies(6): >>45640784 #>>45640804 #>>45640859 #>>45641233 #>>45641253 #>>45645668 #
imjonse No.45640804
I wonder if text written in Chinese characters is more compatible with this kind of vision-centric compression than Latin text.
replies(1): >>45654598 #
Werkzeug No.45654598
I don't think that's the case. Chinese characters have the highest information entropy of any writing system, but they are all independent symbols: there are no roots, prefixes, or suffixes, so you cannot split a character into reusable word pieces, and if you want the LLM to support 5,000 Chinese characters you need 5,000 entries in the lookup table. As a result, you may need fewer characters to express the same meaning than in Latin-script languages, but the LLM also needs many more token embeddings in its vocabulary to cover them.
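
One way to see the tradeoff with an off-the-shelf BPE vocabulary (cl100k_base here purely as an illustration; exact counts depend on the tokenizer, and the Chinese sentence is just a rough translation):

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    en = "Machine learning models compress text into tokens."
    zh = "机器学习模型将文本压缩为词元。"   # roughly the same meaning
    for s in (en, zh):
        print(len(s), "chars ->", len(enc.encode(s)), "tokens")
    # The Chinese string is far shorter in characters, but each character
    # either needs its own vocabulary entry or falls back to several
    # byte-level tokens, since there are no shared prefixes or suffixes.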