
208 points | themanmaran | 1 comment

Last week was big for open source LLMs. We got:

- Qwen 2.5 VL (72b and 32b)

- Gemma-3 (27b)

- DeepSeek-v3-0324

And a couple weeks ago we got the new mistral-ocr model. We updated our OCR benchmark to include the new models.

We evaluated 1,000 documents for JSON extraction accuracy. Major takeaways:

- Qwen 2.5 VL (72b and 32b) are by far the most impressive. Both landed right around 75% accuracy (equivalent to GPT-4o’s performance), and the 72b model was only 0.4% above the 32b, within the margin of error.

- Both Qwen models surpassed mistral-ocr (72.2%), which is specifically trained for OCR.

- Gemma-3 (27B) scored only 42.9%. Particularly surprising given that its architecture is based on Gemini 2.0, which still tops the accuracy chart.

The dataset and benchmark runner are fully open source. You can check out the code and reproduction steps here:

- https://getomni.ai/blog/benchmarking-open-source-models-for-...

- https://github.com/getomni-ai/benchmark

- https://huggingface.co/datasets/getomni-ai/ocr-benchmark
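
The repo defines the exact scoring rubric, but the general idea of field-level JSON extraction accuracy can be sketched in a few lines. This is a simplification, not the benchmark's actual code; `flatten` and `json_accuracy` are illustrative names:

```python
def flatten(obj, prefix=""):
    """Flatten nested JSON into {dotted.path: leaf_value} pairs."""
    items = {}
    if isinstance(obj, dict):
        for k, v in obj.items():
            items.update(flatten(v, f"{prefix}{k}."))
    elif isinstance(obj, list):
        for i, v in enumerate(obj):
            items.update(flatten(v, f"{prefix}{i}."))
    else:
        items[prefix[:-1]] = obj
    return items

def json_accuracy(expected, predicted):
    """Fraction of expected leaf fields the model extracted exactly."""
    exp = flatten(expected)
    pred = flatten(predicted)
    if not exp:
        return 1.0
    correct = sum(1 for k, v in exp.items() if pred.get(k) == v)
    return correct / len(exp)
```

So a prediction that gets one of two leaf fields right scores 0.5, and the per-document scores average out to the percentages quoted above.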

daemonologist No.43550948
You mention that you measured cost and latency in addition to accuracy - would you be willing to share those results as well? (I understand that for these open models they would vary between providers, but it would be useful to have an approximate baseline.)
themanmaran No.43551259
Yes, I'll add that to the writeup! You're right, I initially excluded it because it was heavily dependent on the provider, so there was a lot of variance, especially with the Qwen models.

High level results were:

- Qwen 32b => $0.33/1000 pages => 53s/page

- Qwen 72b => $0.71/1000 pages => 51s/page

- Llama 90b => $8.50/1000 pages => 44s/page

- Llama 11b => $0.21/1000 pages => 8s/page

- Gemma 27b => $0.25/1000 pages => 22s/page

- Mistral => $1.00/1000 pages => 3s/page

esafak No.43551686
A 2D plot would be great.
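
For what it's worth, the cost/latency numbers from the parent comment drop straight into a scatter plot. A quick sketch, assuming matplotlib is installed (a log cost axis keeps the ~40x price spread readable):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; write to a file instead of a window
import matplotlib.pyplot as plt

# (model, USD per 1000 pages, seconds per page), copied from the comment above
results = [
    ("Qwen 32b",  0.33, 53),
    ("Qwen 72b",  0.71, 51),
    ("Llama 90b", 8.50, 44),
    ("Llama 11b", 0.21,  8),
    ("Gemma 27b", 0.25, 22),
    ("Mistral",   1.00,  3),
]

names, costs, latencies = zip(*results)
plt.scatter(costs, latencies)
for name, x, y in results:
    plt.annotate(name, (x, y), textcoords="offset points", xytext=(5, 5))
plt.xscale("log")
plt.xlabel("Cost ($ / 1000 pages)")
plt.ylabel("Latency (s / page)")
plt.title("Open-model OCR: cost vs. latency")
plt.savefig("cost_vs_latency.png")
```

Adding the accuracy numbers as a third dimension (marker size or color) would make the trade-off even clearer.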