
DeepSeek OCR (github.com)

990 points by pierre | 4 comments
breadislove No.45643006
For everyone wondering how good this and other benchmarks are:

- the OmniAI benchmark is bad

- check out OmniDocBench[1] instead

- Mistral OCR is far, far behind most open-source OCR models, and even further behind Gemini

- end-to-end OCR is still extremely tricky

- composed pipelines work better (layout detection -> reading order -> OCR on every element; sketch below)

- complex table parsing is still extremely difficult

[1]: https://github.com/opendatalab/OmniDocBench
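
Roughly, a composed pipeline has this shape. A minimal sketch in Swift, where DetectedBlock and the two stubs are hypothetical stand-ins for whatever layout and recognition models you plug in:

    import CoreGraphics
    import Foundation

    // Sketch of a composed OCR pipeline; not a real library API.
    struct DetectedBlock {
        let bbox: CGRect   // region on the page
        let kind: String   // e.g. "paragraph", "table", "figure"
    }

    // Stage 1: layout detection (replace with a real layout model).
    func detectLayout(_ page: CGImage) -> [DetectedBlock] {
        fatalError("plug in a layout detector here")
    }

    // Stage 2: reading order. A naive heuristic: top-to-bottom, then
    // left-to-right; real systems often use a learned ordering model.
    func readingOrder(_ blocks: [DetectedBlock]) -> [DetectedBlock] {
        blocks.sorted {
            ($0.bbox.minY, $0.bbox.minX) < ($1.bbox.minY, $1.bbox.minX)
        }
    }

    // Stage 3: OCR one cropped element (replace with a real recognizer).
    func ocrBlock(_ page: CGImage, _ block: DetectedBlock) -> String {
        fatalError("plug in a text recognizer here")
    }

    // The full pipeline: detect, order, then recognize each element.
    func composedPipeline(_ page: CGImage) -> String {
        readingOrder(detectLayout(page))
            .map { ocrBlock(page, $0) }
            .joined(separator: "\n\n")
    }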

replies(2): >>45643626 >>45647948
hakunin No.45643626
Wish someone benchmarked the Apple Vision framework against these others. It's built into most Apple devices, but people don't know you can harness it to do fast, good-quality OCR (and go a few extra steps to produce searchable PDFs, which is my typical use case). I'm very curious where it would fall in these benchmarks.
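
For anyone curious, the entry point is Vision's VNRecognizeTextRequest. A minimal sketch (macOS, hypothetical image path, error handling kept short):

    import AppKit
    import Vision

    // Minimal OCR with the Vision framework (macOS).
    let url = URL(fileURLWithPath: "/path/to/page.png")   // hypothetical input
    guard let nsImage = NSImage(contentsOf: url),
          let cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil)
    else {
        fatalError("could not load image")
    }

    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        for observation in observations {
            // topCandidates(1) yields the most confident transcription per line.
            if let best = observation.topCandidates(1).first {
                print(best.string)
            }
        }
    }
    request.recognitionLevel = .accurate   // favor quality over speed

    do {
        try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    } catch {
        print("OCR failed: \(error)")
    }

Each VNRecognizedTextObservation also carries a boundingBox, which is what you need to position the invisible text layer when building a searchable PDF.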
replies(3): >>45643785 >>45643798 >>45645485
wahnfrieden No.45643785
It is unusable trash for languages with any vertical writing, such as Japanese. It simply doesn’t work.
replies(1): >>45644032
thekid314 No.45644032
Yeah, and it quickly fails on anything handwritten.
replies(2): >>45644877 >>45648073
wahnfrieden No.45648073
LiveText too? It has a newer engine.
replies(1): >>45648263
hakunin No.45648263
This is your second comment about LiveText (the older one is https://news.ycombinator.com/item?id=43192141), which I found by complete coincidence because I'm trying to provide a Ruby API for these frameworks. However, I can't find much info on LiveText. What framework is it part of? Do you have any links or additional info? One source says it's specifically for screen and camera capture.
replies(1): >>45648311
wahnfrieden No.45648311
https://developer.apple.com/documentation/visionkit/imageana... It's part of VisionKit. Swift-only (as with many new APIs), so lots of people stuck on ObjC bridges simply ignore it.

It does not provide bounding boxes, but you can get the text.
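
As far as I can tell from the docs, pulling that text out is only a few lines; a sketch, assuming macOS 13+ and an async context:

    import AppKit
    import VisionKit

    // VisionKit's ImageAnalyzer (the LiveText engine). Unlike Vision,
    // the result exposes a plain transcript rather than bounding boxes.
    func liveTextTranscript(of image: NSImage) async throws -> String {
        let analyzer = ImageAnalyzer()
        let config = ImageAnalyzer.Configuration([.text])
        let analysis = try await analyzer.analyze(image,
                                                  orientation: .up,
                                                  configuration: config)
        return analysis.transcript
    }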

replies(1): >>45648652
hakunin No.45648652
That's great, I'm going to give this a shot. If you have any more resources, please do share. I don't mind Swift-only, because I'm writing little shims with `@_cdecl` for the bridge (I don't have much experience here, but I'm hoping it will work, leaning on AI for support).
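
Concretely, the kind of shim I'm attempting looks like this; the symbol name and signature are made up for illustration:

    import Foundation

    // Hypothetical C-ABI shim so Ruby can call the Swift OCR code via FFI.
    // The exported symbol name and signature are illustrative only.
    @_cdecl("ocr_image_at_path")
    public func ocrImageAtPath(_ cPath: UnsafePointer<CChar>) -> UnsafeMutablePointer<CChar>? {
        let path = String(cString: cPath)
        let text = (try? runOCR(atPath: path)) ?? ""
        return strdup(text)   // C string owned by the caller; free() it there
    }

    // Placeholder for the actual Vision/VisionKit call.
    func runOCR(atPath path: String) throws -> String {
        return ""
    }

On the Ruby side that symbol can then be attached with Fiddle or the ffi gem as a pointer-returning function, copied into a Ruby string, and freed.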