357 points ingve | 3 comments
xnx:
Weird that there's no mention of LLMs in this article even though the article is very recent. LLMs haven't solved every OCR/document data extraction problem, but they've dramatically improved the situation.
1. marginalia_nu:
Author here: LLMs are definitely the new gold standard for smaller collections of shorter documents.

The article is in the context of an internet search engine, where the corpus to be converted is on the order of 1 TB. Running that amount of data through an LLM would be extremely expensive, given the relatively marginal improvement in outcome.

2. mediaman:
Corpus size doesn't mean much in the context of PDFs, given how much the size per page can vary.

I've found Google's Gemini Flash cuts my OCR costs by 95% or more compared to traditional commercial offerings that support structured data extraction, and I still get tables, headers, etc. from each page. Still not perfect, but the per-page cost was less than a tenth of a cent, and 100 GB collections of PDFs ran to a few hundred dollars.
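The per-page arithmetic above can be sketched as a back-of-envelope estimate. All constants here are assumptions chosen to match the figures quoted (under a tenth of a cent per page, pages-per-GB varies hugely by scan quality), not measured values:

```python
# Back-of-envelope OCR cost estimate (illustrative constants, not measurements).
COST_PER_PAGE_USD = 0.001  # assumed upper bound: a tenth of a cent per page
PAGES_PER_GB = 3000        # assumed average; image-heavy scans would be far fewer

def estimate_cost(corpus_gb: float) -> float:
    """Rough total OCR cost in USD for a PDF corpus of the given size."""
    return corpus_gb * PAGES_PER_GB * COST_PER_PAGE_USD

print(estimate_cost(100))  # ~300 USD for a 100 GB collection
```

With these assumptions, a 100 GB collection lands in the "few hundred dollars" range the comment describes.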

3. noosphr:
A PDF corpus of 1 TB can mean anything from 10,000 really poorly scanned documents to 1,000,000,000 nicely generated LaTeX PDFs. What matters is the number of documents and the number of pages per document.
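The two extremes quoted follow directly from per-document size. A quick illustration, with per-document sizes assumed so as to reproduce the comment's numbers:

```python
TB = 10**12  # bytes in a terabyte (decimal)

# Assumed per-document sizes at the two extremes mentioned above.
scanned_doc_bytes = 100 * 10**6  # ~100 MB for a poorly scanned, image-heavy PDF
latex_doc_bytes = 10**3          # ~1 KB for a tiny generated PDF (extreme case)

print(TB // scanned_doc_bytes)  # 10000 documents
print(TB // latex_doc_bytes)    # 1000000000 documents
```

Five orders of magnitude in document count from the same raw corpus size, which is why size alone tells you little about the OCR workload.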

For the first, I can run a segmentation model plus traditional OCR in a day or two for the cost of warming my office in winter. For the second, you'd need a few hundred dollars and a cloud server.

Feel free to reach out. I'd be happy to have a chat and do some pro bono work for someone building an open-source tool chain and index for the rest of us.