
DeepSeek OCR (github.com)
990 points by pierre | 1 comment
dumpsterkid:
I haven't fired this up yet, but I've been evaluating and working with quite a few different VLMs, from the small Granite and Qwen models up to the larger ones, to see if we can fully replace traditional OCR in our system, and so far I've been disappointed. Our system takes documents from customers and supplies back normalized documents (i.e. rasterized multi-page bitmaps) marked up as they've requested. In our use case we need accurate coordinates down to the letter/word level, and in my experience the positional outputs from these VLMs are either wildly inconsistent, completely hallucinated, or so vague that we can't target anything with any real accuracy or granularity.

Our solution so far has been to stick with Tesseract plus good clean-up routines, then augment/fix up the output with the VLM OCR text where we don't have structured source document data available.
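A minimal sketch of that hybrid step, assuming pytesseract and Pillow; the vlm_words list and the naive index-based alignment are placeholders for whatever VLM output and matching logic you actually use:

    import pytesseract
    from pytesseract import Output
    from PIL import Image

    def ocr_with_boxes(image_path):
        """Run Tesseract and return word-level text with pixel coordinates."""
        data = pytesseract.image_to_data(Image.open(image_path), output_type=Output.DICT)
        words = []
        for i, text in enumerate(data["text"]):
            if not text.strip():
                continue
            words.append({
                "text": text,
                "conf": int(float(data["conf"][i])),  # conf may come back as a string
                "box": (data["left"][i], data["top"][i],
                        data["width"][i], data["height"][i]),
            })
        return words

    def patch_low_confidence(words, vlm_words, conf_threshold=60):
        """Swap in VLM text for low-confidence Tesseract words while keeping
        Tesseract's coordinates. Aligning by reading-order index is naive; a
        real system would fuzzy-match or align by position."""
        patched = []
        for i, w in enumerate(words):
            if w["conf"] < conf_threshold and i < len(vlm_words):
                w = {**w, "text": vlm_words[i]}
            patched.append(w)
        return patched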

It could be that we just have a very niche use case that doesn't matter to most people. I'm sure these VLMs work well if you just want a text dump or a restructured markdown/HTML representation of a document, but the number of articles and comments I've seen claiming these models have 'solved' OCR seems counter to our experience.

sampton:
You can train a CNN to find bounding boxes of text first, then run the VLM on each box.
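A rough sketch of that detect-then-recognize idea, assuming OpenCV >= 4.5 with the separately downloaded EAST text-detection weights standing in for the trained CNN; run_vlm_on_crop is a hypothetical callback for whatever model transcribes each crop:

    import cv2
    import numpy as np

    def detect_text_boxes(image, east_path="frozen_east_text_detection.pb"):
        """Return axis-aligned (x, y, w, h) boxes from the EAST text detector."""
        det = cv2.dnn.TextDetectionModel_EAST(east_path)
        det.setConfidenceThreshold(0.5)
        det.setNMSThreshold(0.4)
        # EAST expects a fixed input size that is a multiple of 32
        det.setInputParams(1.0, (320, 320), (123.68, 116.78, 103.94), True)
        quads, _conf = det.detect(image)
        boxes = []
        for quad in quads:  # each detection is a 4-point quadrangle
            q = np.asarray(quad)
            x, y = q[:, 0].min(), q[:, 1].min()
            boxes.append((int(x), int(y),
                          int(q[:, 0].max() - x), int(q[:, 1].max() - y)))
        return boxes

    def transcribe_regions(image, boxes, run_vlm_on_crop):
        """Crop each detected box and ask the recognition model for its text,
        keeping the detector's coordinates alongside the transcription."""
        return [{"box": (x, y, w, h), "text": run_vlm_on_crop(image[y:y + h, x:x + w])}
                for (x, y, w, h) in boxes]

The coordinates then come from the detector rather than the VLM, which sidesteps the hallucinated-position problem the parent describes, at the cost of an extra detection stage and per-crop inference.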