1303 points | serjester | 1 comment
llm_trw No.42955414
This is using exactly the wrong tools at every stage of the OCR pipeline, and the cost is astronomical as a result.

You don't use multimodal models to extract a wall of text from an image. They hallucinate constantly the second you get past perfect, high-fidelity images.

You use an object detection model trained on documents to find the bounding boxes of each document section as _images_; each bounding box comes with a confidence score for free.
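
A minimal sketch of this stage, assuming layoutparser with a PubLayNet-trained Detectron2 model (the detector, weights, and file paths are my illustration; the comment doesn't name specific tools):

```python
import layoutparser as lp
import cv2

# Load one page image (path is hypothetical) and convert BGR -> RGB.
image = cv2.imread("page_001.png")[..., ::-1]

# A document layout detector pre-trained on PubLayNet.
model = lp.Detectron2LayoutModel(
    "lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config",
    extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.5],
    label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},
)

# Each detected block carries a region type, pixel coordinates,
# and a confidence score "for free".
layout = model.detect(image)
for block in layout:
    print(block.type, block.coordinates, block.score)
```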

You then feed each box of text to a regular OCR model, which also gives you a confidence score along with each prediction it makes.
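
As a sketch, with Tesseract (via pytesseract) standing in for "a regular OCR model", image_to_data exposes the per-word confidence:

```python
import pytesseract
from pytesseract import Output

def ocr_with_confidence(region_image):
    """Run OCR on one cropped text region, keeping per-word confidences."""
    data = pytesseract.image_to_data(region_image, output_type=Output.DICT)
    words = []
    for text, conf in zip(data["text"], data["conf"]):
        if text.strip():
            # Tesseract reports confidence 0-100; -1 means no text found.
            words.append({"text": text, "conf": float(conf)})
    return words
```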

You feed each image box into a multimodal model to describe what the image is about.
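
For illustration only, here is that step with BLIP as a stand-in captioning model (any local multimodal model would do; the checkpoint name is my choice):

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

def describe_figure(crop: Image.Image) -> str:
    """Produce a short natural-language description of one figure crop."""
    inputs = processor(images=crop, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=50)
    return processor.decode(out[0], skip_special_tokens=True)
```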

For tables, use a specialist model that does nothing but extract tables—models like GridFormer that aren't hyped to hell and back.
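
GridFormer doesn't ship as an off-the-shelf package, so as a rough stand-in here is Microsoft's Table Transformer, another specialist table-structure model:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

processor = AutoImageProcessor.from_pretrained(
    "microsoft/table-transformer-structure-recognition")
model = TableTransformerForObjectDetection.from_pretrained(
    "microsoft/table-transformer-structure-recognition")

def table_structure(crop: Image.Image):
    """Detect rows/columns/cells in a cropped table image, with scores."""
    inputs = processor(images=crop, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    target_sizes = torch.tensor([crop.size[::-1]])  # (height, width)
    results = processor.post_process_object_detection(
        outputs, threshold=0.7, target_sizes=target_sizes)[0]
    return [
        (model.config.id2label[int(label)], box.tolist(), float(score))
        for label, box, score in zip(
            results["labels"], results["boxes"], results["scores"])
    ]
```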

You then stitch everything together in an XML file because Markdown is for human consumption.

You now have everything extracted with flat XML markup for each category the object detection model knows about, along with multiple types of probability metadata for each bounding box, each letter, and each table cell.
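
A sketch of the stitching step with the standard library's ElementTree (the region dict shape is assumed to follow the earlier stages):

```python
import xml.etree.ElementTree as ET

def build_page_xml(page_num, regions):
    """Flat XML: one element per detected region, confidences as attributes."""
    page = ET.Element("page", number=str(page_num))
    for r in regions:
        el = ET.SubElement(
            page,
            r["type"].lower(),  # text / title / list / table / figure
            bbox=",".join(f"{c:.0f}" for c in r["bbox"]),
            det_conf=f"{r['det_conf']:.3f}",
        )
        if r["type"] == "Figure":
            el.text = r["description"]  # caption from the multimodal model
        else:
            for w in r.get("words", []):
                word = ET.SubElement(el, "w", conf=f"{w['conf']:.1f}")
                word.text = w["text"]
    return page

# ET.ElementTree(build_page_xml(1, regions)).write("page_001.xml", encoding="utf-8")
```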

You can now start feeding this data programmatically into an LLM to do _text_ processing, where you use the XML to control what parts of the document you send to the LLM.
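
For example, using the XML to pick only high-confidence prose regions before spending any LLM tokens (summarize is a placeholder for whatever client you run locally, not a real API):

```python
def sections_for_llm(page, min_conf=0.8, types=("text", "title")):
    """Select regions whose detection confidence clears a threshold."""
    picked = []
    for el in page:
        if el.tag in types and float(el.get("det_conf")) >= min_conf:
            picked.append(" ".join(w.text for w in el.findall("w")))
    return picked

# for chunk in sections_for_llm(page):
#     summary = summarize(chunk)  # hypothetical local LLM call
```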

You then get chunking with location data and confidence scores for every part of the document to put as metadata into the RAG store.
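
A sketch of loading one such chunk into a RAG store, with chromadb as an example vector store (store choice and field names are my assumptions):

```python
import chromadb

client = chromadb.Client()
collection = client.create_collection("documents")

# One entry per region; location and confidence become queryable metadata.
collection.add(
    ids=["doc1-p1-r0"],
    documents=["...extracted text of the region..."],
    metadatas=[{
        "source": "doc1.pdf",
        "page": 1,
        "bbox": "120,88,940,310",
        "det_conf": 0.97,
        "mean_ocr_conf": 91.4,
    }],
)
```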

I've built a system that reads 500k pages _per day_ using the above, running completely locally on a machine that cost $20k.

eitally No.42959394
Fwiw, I'm not convinced Gemini isn't using a document-based object detection model for this, at least for some parts of the pipeline or for some doc categories (especially common things like IDs, bills, tax forms, invoices & POs, shipping documents, etc. that they've previously created document extractors for as part of their DocAI cloud service).
simonw No.42959434
I don't see why they would do that. The whole point of training a model like Gemini is that you train the model - if they want it to work great against those different categories of document, the likely way to do it is to add a whole bunch of those documents to Gemini's regular training set.