
293 points by lapnect | 1 comment | HN request time: 0.211s | source
notsylver ◴[] No.42154841[source]
I've been doing a lot of OCR recently, mostly digitising text from family photos. Conventional OCR models are terrible at it; LLMs do far better. Gemini Flash came out on top of the models I tested, and it wasn't even close. It still had enough failures and hallucinations that it was faster to type the text in by hand. Annoying, considering how close it feels to working.

This seems worse. Sometimes it replies with just the text, sometimes it replies with a full "The image is a scanned document with handwritten text...". I was hoping for some fine-tuning or something for it to beat Gemini Flash; it would save me a lot of time. :(

replies(7): >>42154901 #>>42155002 #>>42155087 #>>42155372 #>>42155438 #>>42156428 #>>42156646 #
8n4vidtmkvmk ◴[] No.42155002[source]
That's a bummer. I'm trying to do exactly the same thing right now: digitize family photos. Some of mine have German on the back. The last OCR model to hit the headlines was terrible, so I was hoping this one would be better. ChatGPT 4o has been good, though, when I paste individual images into the chat. I haven't tried the API yet, and I'm not sure how much it would cost to process 6500 photos, many of which are blank, but I don't have an easy way to filter those out either.
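One cheap way to filter out blank backs before paying for API calls is a uniformity heuristic: a scan of an unwritten back is nearly uniform, so its grayscale pixel variance is very low. A minimal sketch with Pillow; the threshold value here is an assumption and should be tuned on a few known-blank scans:

```python
from PIL import Image, ImageStat

def looks_blank(img: Image.Image, stddev_threshold: float = 10.0) -> bool:
    """Heuristic blank-back detector.

    A blank photo back is nearly uniform paper, so the standard
    deviation of its grayscale pixels is small. Anything with
    handwriting, stamps, or printing pushes the deviation up.
    The threshold is a guess; tune it on known-blank scans.
    """
    gray = img.convert("L")  # single-band grayscale
    return ImageStat.Stat(gray).stddev[0] < stddev_threshold
```

In a batch run you would `Image.open()` each scan, skip the ones where `looks_blank()` is true, and send only the rest to the API. Uneven lighting or yellowed paper can inflate the variance, so this errs toward sending too many images rather than dropping written ones.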
replies(2): >>42155142 #>>42155260 #
bosie ◴[] No.42155142[source]
Use a local rubbish model to extract text. If it doesn't find any on the back, don't send it to ChatGPT?

Terrascan comes to mind

replies(1): >>42159947 #
8n4vidtmkvmk ◴[] No.42159947[source]
"Terrascan" is a vision model? The only hits I'm getting are for a static code analyzer.
replies(1): >>42176149 #
bosie ◴[] No.42176149[source]
Sorry, I meant "Tesseract".