Last week was a big week for open-source models. We got:
- Qwen 2.5 VL (72b and 32b)
- Gemma-3 (27b)
- DeepSeek-v3-0324
And a couple weeks ago we got the new mistral-ocr model. We updated our OCR benchmark to include the new models.
We evaluated 1,000 documents for JSON extraction accuracy. Major takeaways:
- Qwen 2.5 VL (72b and 32b) are by far the most impressive. Both landed right around 75% accuracy, equivalent to GPT-4o's performance. Qwen 72b was only 0.4% above 32b, which is within the margin of error.
- Both Qwen models beat mistral-ocr (72.2%), which is specifically trained for OCR.
- Gemma-3 (27b) only scored 42.9%. That's particularly surprising given that its architecture is based on Gemini 2.0, which still tops the accuracy chart.
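For anyone curious what "JSON extraction accuracy" means mechanically, here's a minimal sketch of one way to score it: flatten the predicted and ground-truth JSON down to leaf fields and count exact matches. The actual scoring in the benchmark repo may differ (e.g. fuzzy string matching or per-field weighting), so treat this as illustrative only:

    def flatten(obj, prefix=""):
        """Flatten nested JSON into {dotted.path: leaf_value} pairs."""
        items = {}
        if isinstance(obj, dict):
            for k, v in obj.items():
                items.update(flatten(v, f"{prefix}{k}."))
        elif isinstance(obj, list):
            for i, v in enumerate(obj):
                items.update(flatten(v, f"{prefix}{i}."))
        else:
            items[prefix.rstrip(".")] = obj
        return items

    def json_accuracy(predicted: dict, truth: dict) -> float:
        """Fraction of ground-truth leaf fields the model got exactly right."""
        truth_fields = flatten(truth)
        pred_fields = flatten(predicted)
        if not truth_fields:
            return 1.0
        hits = sum(1 for path, val in truth_fields.items()
                   if pred_fields.get(path) == val)
        return hits / len(truth_fields)

    # Example: 2 of 3 fields match -> ~0.67
    truth = {"invoice": {"number": "INV-42", "total": 118.0}, "vendor": "Acme"}
    pred = {"invoice": {"number": "INV-42", "total": 117.0}, "vendor": "Acme"}
    print(round(json_accuracy(pred, truth), 2))

Exact-match at the leaf level is a strict metric; a real runner might give partial credit for near-miss strings or numbers.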
The dataset and benchmark runner are fully open source. You can check out the code and reproduction steps here:
- https://getomni.ai/blog/benchmarking-open-source-models-for-...
Is there an advantage to using an LLM here?
There are some comments I've run across saying Qwen2.5-VL is really good at handwriting recognition.
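If you want to poke at that yourself, Qwen2.5-VL is easy to try through any OpenAI-compatible server (e.g. vLLM). A minimal sketch; the base_url, model name, and sample.png here are placeholder assumptions, not from the benchmark:

    import base64
    from openai import OpenAI

    # Assumes a local OpenAI-compatible server (e.g. vLLM) hosting Qwen2.5-VL.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

    with open("sample.png", "rb") as f:  # hypothetical handwriting sample
        b64 = base64.b64encode(f.read()).decode()

    resp = client.chat.completions.create(
        model="Qwen/Qwen2.5-VL-32B-Instruct",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe all handwritten text in this image."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    print(resp.choices[0].message.content)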
It'd also be interesting to see how Tesseract compares when OCRing more mixed text+graphic media. Some possible examples: high-design magazines with color backgrounds, TikTok posts, maps, hand-held cardboard signs at political gatherings.
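For reference, the Tesseract side of that comparison is nearly a one-liner with pytesseract, though on busy colored backgrounds it usually needs preprocessing to stand a chance. A rough sketch (the image filename is a placeholder):

    from PIL import Image
    import pytesseract

    # Plain Tesseract pass; expect it to struggle on colored/busy backgrounds
    # without preprocessing (thresholding, deskewing, etc.).
    img = Image.open("magazine_page.png").convert("L")  # grayscale first
    text = pytesseract.image_to_string(img)
    print(text)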