
DeepSeek OCR (github.com)
990 points by pierre | 24 comments
breadislove ◴[] No.45643006[source]
For everyone wondering how good this and other benchmarks are:

- the OmniAI benchmark is bad

- Instead check OmniDocBench[1] out

- Mistral OCR is far, far behind most open-source OCR models and even further behind Gemini

- End to End OCR is still extremely tricky

- composed pipelines work better (layout detection -> reading order -> OCR every element; rough sketch below)

- complex table parsing is still extremely difficult

[1]: https://github.com/opendatalab/OmniDocBench
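
To make the "composed pipeline" point concrete, here's a rough sketch of the staged structure. It uses Apple's Vision framework purely as a stand-in, so treat it as an illustration of the stages rather than a real document-layout pipeline:

    import Foundation
    import Vision

    // Sketch of a composed pipeline: layout detection -> reading order -> OCR per element.
    func composedOCR(imageURL: URL) throws -> [String] {
        // 1. "Layout detection": find candidate text regions.
        let layout = VNDetectTextRectanglesRequest()
        try VNImageRequestHandler(url: imageURL, options: [:]).perform([layout])
        let regions = (layout.results as? [VNTextObservation]) ?? []

        // 2. Reading order: top-to-bottom, then left-to-right
        //    (Vision's normalized coordinates put the origin at the bottom-left).
        let ordered = regions.sorted {
            $0.boundingBox.minY != $1.boundingBox.minY
                ? $0.boundingBox.minY > $1.boundingBox.minY
                : $0.boundingBox.minX < $1.boundingBox.minX
        }

        // 3. OCR every element by restricting recognition to its bounding box.
        return try ordered.compactMap { region in
            let recognize = VNRecognizeTextRequest()
            recognize.recognitionLevel = .accurate
            recognize.regionOfInterest = region.boundingBox
            try VNImageRequestHandler(url: imageURL, options: [:]).perform([recognize])
            return (recognize.results as? [VNRecognizedTextObservation])?
                .compactMap { $0.topCandidates(1).first?.string }
                .joined(separator: " ")
        }
    }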

replies(2): >>45643626 #>>45647948 #
1. hakunin ◴[] No.45643626[source]
Wish someone benchmarked Apple Vision Framework against these others. It's built into most Apple devices, but people don't know you can actually harness it to do fast, good quality OCR for you (and go a few extra steps to produce searchable pdfs, which is my typical use case). I'm very curious where it would fall in the benchmarks.
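
For the curious, the "harnessing" part is mostly just Vision's VNRecognizeTextRequest. A minimal sketch (the input path and options are placeholders):

    import Foundation
    import Vision

    // OCR a single image with the Vision framework and print the recognized lines.
    let url = URL(fileURLWithPath: "/tmp/scan.png")  // placeholder path

    let request = VNRecognizeTextRequest { req, _ in
        let observations = (req.results as? [VNRecognizedTextObservation]) ?? []
        for observation in observations {
            // topCandidates(1) gives the highest-confidence transcription per text region.
            if let candidate = observation.topCandidates(1).first {
                print(candidate.string)
            }
        }
    }
    request.recognitionLevel = .accurate   // trade speed for quality
    request.usesLanguageCorrection = true

    // perform(_:) is synchronous; errors are simply ignored in this sketch.
    let handler = VNImageRequestHandler(url: url, options: [:])
    try? handler.perform([request])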
replies(3): >>45643785 #>>45643798 #>>45645485 #
2. wahnfrieden ◴[] No.45643785[source]
It is unusable trash for languages with any vertical writing such as Japanese. It simply doesn’t work.
replies(1): >>45644032 #
3. CaptainOfCoit ◴[] No.45643798[source]
Yeah, if it were cross-platform maybe more people would be curious about it, but something that only runs on ~10% of the hardware people have isn't very attractive, so it's hard to justify even starting to spend time on Apple-exclusive stuff.
replies(2): >>45644313 #>>45644771 #
4. thekid314 ◴[] No.45644032[source]
Yeah, and fails quickly at anything handwritten.
replies(2): >>45644877 #>>45648073 #
5. ch1234 ◴[] No.45644313[source]
But you can have an Apple device deployed in your stack to handle the OCR, right? I get that on-device is a hardware limitation for many, but if you have an Apple device in your stack, can’t you leverage this?
replies(1): >>45645344 #
6. hakunin ◴[] No.45644771[source]
10% of hardware is an insanely vast amount, no?
replies(1): >>45645352 #
7. hakunin ◴[] No.45644877{3}[source]
I mostly OCR English, so Japanese (as mentioned by parent) wouldn't be an issue for me, but I do care about handwriting. See, these insights are super helpful. If only there was, say, a benchmark to show these.

My main question really is: what are practical OCR tools that I can string together on my MacBook Pro M1 Max w/ 64GB Ram to maximize OCR quality for lots of mail and schoolwork coming into my house, all mostly in English.

I use ScanSnap Manager with its built in OCR tools, but that's probably super outdated by now. Apple Vision does way better job than that. I heard people say also that Apple Vision is better than Tesseract. But is there something better still that's also practical to run in a scripted environment on my machine?

8. CaptainOfCoit ◴[] No.45645344{3}[source]
Yeah, but handling macOS as infrastructure capacity sucks; Apple really doesn't want you to, so the tooling is almost nonexistent. I've set up CI/CD stacks before that needed macOS builders, and they're always the most cumbersome machines to manage as infrastructure.
replies(1): >>45645876 #
9. CaptainOfCoit ◴[] No.45645352{3}[source]
Well, it's 90% less than what everyone else uses, so even if the total number is big, relatively it has a small user-base.
replies(1): >>45645529 #
10. graeme ◴[] No.45645485[source]
Interesting. How do you harness it for that purpose? I've found apple ocr to be very good.
replies(2): >>45645618 #>>45653471 #
11. hakunin ◴[] No.45645529{4}[source]
I don’t think 10% of anything would be considered relatively small, even if we were talking about only 10 items: out of just 10, this 1 still has the rare quality of being among them. Let alone billions of devices. Unless you want to reduce it to tautology and, instead of answering “why isn’t it benchmarked”, just go with “10 is smaller than 90, so I’m right”.

My point is, I don’t think any comparative benchmark would ever exclude something based on “oh it’s just 10%, who cares.” I think the issue is more that Apple Vision Framework is not well known as an OCR option, but maybe it’s starting to change.

And another part of the irony is that Apple’s framework probably gets way more real world usage in practice than most of the tools in that benchmark.

replies(1): >>45645708 #
12. hakunin ◴[] No.45645618[source]
The short answer is a tool like OwlOCR (which also has CLI support). The long answer is that there are tools on github (I created the stars list: https://github.com/stars/maxim/lists/apple-vision-framework/) that try to use the framework for various things. I’m also trying to build an ffi-based Ruby gem that provides convenient access in Ruby to the framework’s functionality.
13. CaptainOfCoit ◴[] No.45645708{5}[source]
The initial wish was that more people cared about the Apple Vision Framework. I'm merely claiming that since most people don't actually have Apple hardware, they avoid Apple technology, as it commonly only runs on Apple hardware.

So I'm not saying it should be excluded because it can only be used by relatively few people; I was trying to communicate that I kind of get why not many people care about it and why it gets forgotten, since most people wouldn't be able to run it even if they wanted to.

Instead, something like DeepSeek OCR could be deployed on any of the three major OSes (assuming there are implementations of the architecture available), so of course it gets a lot more attention and will be included in way more benchmarks.

replies(1): >>45646063 #
14. coder543 ◴[] No.45645876{4}[source]
AWS literally lets you deploy Macs as EC2 instances, which I believe includes all of AWS's usual EBS storage and disk imaging features.
replies(1): >>45646292 #
15. hakunin ◴[] No.45646063{6}[source]
I get what you're saying, I'm just disagreeing with your thought process. By that logic benchmarks would also not include the LLMs that they did, since most people wouldn't be able to run those either (it takes expensive hardware). In fact, more people would probably be able to run Vision framework than those LLMs, for cheaper (Vision is even on iPhones). I'm more inclined to agree if you say "maybe people just don't like Apple". :)
16. CaptainOfCoit ◴[] No.45646292{5}[source]
Alright, so the easy part is done; now how do you actually manage them, keep them running, and do introspection without resorting to SSH or even remote desktop?
replies(1): >>45646313 #
17. coder543 ◴[] No.45646313{6}[source]
How do you manage any EC2 instance “without resorting to SSH”? Even for Linux EC2 instances, the right answer is often tools like Ansible, which do still use SSH under the hood.
replies(1): >>45647232 #
18. CaptainOfCoit ◴[] No.45647232{7}[source]
You usually provision them via images, which they then either install from or boot from directly. Not to mention there is countless infrastructure software out there that works at least on Linux, sometimes on Windows, and only seldom on macOS.
replies(1): >>45647288 #
19. coder543 ◴[] No.45647288{8}[source]
I specifically mentioned the imaging capability of EBS for Mac, which you dismissed as the easy part. Now you’re claiming that is the main thing? Well, good news!

And yes, Ansible (among other tools) can be used to manage macOS.

This discussion doesn’t seem productive. You have a preconceived viewpoint, and you’re not actually considering the problem or even doing 5 seconds of googling.

Managing a Mac fleet on AWS isn’t a real problem. If Apple’s OCR framework were significantly above the competition, it could easily be used. I would like to see benchmarks of it, as the other person was also asking for.

20. wahnfrieden ◴[] No.45648073{3}[source]
LiveText too? It has a newer engine
replies(1): >>45648263 #
21. hakunin ◴[] No.45648263{4}[source]
This is the second comment of yours about LiveText (this is the older one https://news.ycombinator.com/item?id=43192141) — I found that one by complete coincidence because I'm trying to provide a Ruby API for these frameworks. However, I can't find much info on LiveText? What framework is it part of? Do you have any links or any additional info? I found a source where they say it's specifically for screen and camera capturing.
replies(1): >>45648311 #
22. wahnfrieden ◴[] No.45648311{5}[source]
https://developer.apple.com/documentation/visionkit/imageana... VisionKit. Swift-only (as with many new APIs) so lots of people stuck on ObjC bridges simply ignore it.

It does not provide bounding boxes but you can get text.
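
Rough sketch of getting the transcript this way (macOS 13+ / iOS 16+; the helper name is made up):

    import Foundation
    import ImageIO
    import VisionKit

    // Pull the plain-text LiveText transcript for an image via VisionKit's ImageAnalyzer.
    @MainActor
    func liveTextTranscript(at url: URL) async throws -> String {
        let analyzer = ImageAnalyzer()
        let configuration = ImageAnalyzer.Configuration([.text])
        let analysis = try await analyzer.analyze(imageAt: url, orientation: .up, configuration: configuration)
        return analysis.transcript
    }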

replies(1): >>45648652 #
23. hakunin ◴[] No.45648652{6}[source]
That's great, I'm going to give this a shot. If you have any more resources please do share. I don't mind Swift-only, because I'm writing little shims with `@_cdecl` for the bridge (don't have much experience here, but hoping this is going to work, leaning on AI for support).
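
Roughly the shape I have in mind, totally untested; the exported symbol name and the strdup/free convention are just placeholders:

    import Foundation
    import Vision

    // C-callable entry point that a Ruby FFI binding (e.g. Fiddle) could call
    // after dlopen'ing the compiled dylib. Returns recognized text, one line per region.
    @_cdecl("ocr_image_at_path")
    public func ocr_image_at_path(_ cPath: UnsafePointer<CChar>) -> UnsafeMutablePointer<CChar>? {
        let path = String(cString: cPath)
        let request = VNRecognizeTextRequest()
        request.recognitionLevel = .accurate
        let handler = VNImageRequestHandler(url: URL(fileURLWithPath: path), options: [:])
        guard (try? handler.perform([request])) != nil else { return nil }
        let lines = ((request.results as? [VNRecognizedTextObservation]) ?? [])
            .compactMap { $0.topCandidates(1).first?.string }
        // The caller is responsible for free()ing the returned buffer.
        return strdup(lines.joined(separator: "\n"))
    }
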
24. ah27182 ◴[] No.45653471[source]
Apple Shortcuts allows you to use OCR on images you pass into it. Look for “Extract Text from Image”.