
169 points Tammilore | 17 comments

Documind is an open-source tool that turns documents into structured data using AI.

What it does:

- Extracts specific data from PDFs based on your custom schema
- Returns clean, structured JSON that's ready to use
- Works with just a PDF link + your schema definition

Just run npm install documind to get started.
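To make the schema-in, JSON-out flow concrete: the pitch above doesn't show the call signature, so this is a hypothetical sketch, not documind's actual API. The schema shape and the `matchesSchema` helper are illustrations of the idea only.

```javascript
// Hypothetical sketch of a schema-driven extraction flow. The schema
// shape and helper below are assumptions, not documind's real API.
const invoiceSchema = {
  invoice_number: "string",
  total: "number",
  line_items: "array",
};

// Check that an extraction result matches the schema's field types.
function matchesSchema(result, schema) {
  return Object.entries(schema).every(([field, type]) =>
    type === "array" ? Array.isArray(result[field]) : typeof result[field] === type
  );
}

// Example of the kind of structured JSON such a tool would hand back.
const extracted = { invoice_number: "INV-001", total: 99.5, line_items: [] };
console.log(matchesSchema(extracted, invoiceSchema)); // true
```

The point of the schema is exactly this kind of contract: downstream code can validate the model's output mechanically instead of parsing free text.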

1. emmanueloga_ ◴[] No.42173837[source]
From the source, Documind appears to:

1) Install tools like Ghostscript, GraphicsMagick, and LibreOffice with a JS script.

2) Convert document pages to Base64 PNGs and send them to OpenAI for data extraction.

3) Use Supabase for unclear reasons.
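Step 2 above boils down to something like this minimal Node sketch: encode the rendered page as a Base64 data URL and wrap it in a Chat Completions payload. The PNG bytes here are a four-byte stand-in and the prompt text is an example; a real pipeline would read the rendered page file.

```javascript
// Encode a page image as a Base64 data URL (stand-in bytes: the PNG
// magic number) and build an OpenAI chat-completions request body.
const pageBytes = Buffer.from([0x89, 0x50, 0x4e, 0x47]); // stand-in for a real page PNG
const dataUrl = `data:image/png;base64,${pageBytes.toString("base64")}`;

const payload = {
  model: "gpt-4o-mini",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Extract the invoice fields as JSON." },
        { type: "image_url", image_url: { url: dataUrl } },
      ],
    },
  ],
};

console.log(dataUrl); // data:image/png;base64,iVBORw==
```

Every page becomes one of these image parts, which is also why the retention discussion further down matters: the full page image leaves your machine.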

Some issues with this approach:

* OpenAI may retain and use your data for training, raising privacy concerns [1].

* Dependencies should be managed with Docker or package managers like Nix or Pixi, which are more robust. Example: a tool like Parsr [2] provides a Dockerized pdf-to-json solution, complete with OCR support and an HTTP API.

* GPT-4 vision seems like a costly, error-prone, and unreliable solution, not really suited to extracting data from sensitive docs like invoices without review.

* Traditional methods (PDF parsers with OCR support) are cheaper, more reliable, and avoid retention risks for this particular use case. These tools do require some plumbing, though... LLMs can probably really help with that!
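The "plumbing" in the traditional route is mostly ordinary string work once a parser or OCR pass has produced plain text. A toy sketch (the invoice text and field patterns below are made-up examples):

```javascript
// After a PDF parser / OCR pass, field extraction is plain string work.
// Sample text and regexes are invented for illustration.
const ocrText = `
Invoice No: INV-2024-017
Date: 2024-11-18
Total: 1,249.00 EUR
`;

const patterns = {
  invoiceNumber: /Invoice No:\s*(\S+)/,
  date: /Date:\s*(\d{4}-\d{2}-\d{2})/,
  total: /Total:\s*([\d,.]+)/,
};

// Apply each pattern; null when a field is missing from the text.
const fields = Object.fromEntries(
  Object.entries(patterns).map(([name, re]) => [name, ocrText.match(re)?.[1] ?? null])
);

console.log(fields);
// { invoiceNumber: 'INV-2024-017', date: '2024-11-18', total: '1,249.00' }
```

Brittle across layouts, which is the real argument for involving an LLM, but everything stays on your own machine.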

While there are plenty of tools for structured data extraction, I think there’s still room for a streamlined, all-in-one solution. This gap likely explains the abundance of closed-source commercial options tackling this very challenge.

---

1: https://platform.openai.com/docs/models#how-we-use-your-data

2: https://github.com/axa-group/Parsr

replies(5): >>42175186 #>>42176460 #>>42176836 #>>42178185 #>>42195512 #
2. ◴[] No.42175186[source]
3. groby_b ◴[] No.42176460[source]
That's not what [1] says, though? Quoth: "As of March 1, 2023, data sent to the OpenAI API will not be used to train or improve OpenAI models (unless you explicitly opt-in to share data with us, such as by providing feedback in the Playground). "

"Traditional methods (PDF parsers with OCR support) are cheaper, more reliable"

Not sure on the reliability - the ones I'm using all fail at structured data. You want a table extracted from a PDF, LLMs are your friend. (Recommendations welcome)

replies(2): >>42176810 #>>42179086 #
4. niklasd ◴[] No.42176810[source]
We found that for extracting tables, OpenAI's LLMs aren't great. What works well for us is Docling (https://github.com/DS4SD/docling/)
replies(2): >>42178239 #>>42180258 #
5. brianjking ◴[] No.42176836[source]
OpenAI isn't retaining data sent via the API for training. Stop.
6. themanmaran ◴[] No.42178185[source]
Disappointed to see this is an exact rip of our open source tool zerox [1]. With no attribution. They also took the MIT License and changed it out for an AGPL.

If you inspect the source code, it's a verbatim copy. They literally just renamed the ZeroxOutput to DocumindOutput [2][3]

[1] https://github.com/getomni-ai/zerox

[2] https://github.com/DocumindHQ/documind/blob/main/core/src/ty...

[3] https://github.com/getomni-ai/zerox/blob/main/node-zerox/src...

replies(3): >>42178533 #>>42178736 #>>42200734 #
7. soci ◴[] No.42178239{3}[source]
Agreed, extracting tables in PDFs using any of the available OpenAI models has been a waste of prompting time here too.
8. alchemist1e9 ◴[] No.42178533[source]
Are there any reputation mechanisms or github flagging systems to alert users to such scams?

It’s pretty unethical behavior if what you describe is the full story. As a user of many open source projects, how can one be aware of this type of behavior?

9. Tammilore ◴[] No.42178736[source]
Hello. I apologize that it came across this way. This was not the intention. Zerox was definitely used and I made sure to copy and include the MIT license exactly as it was inside the part of the code that uses Zerox.

If there's anything else I can do, please let me know and I will make all amendments immediately.

replies(1): >>42200920 #
10. emmanueloga_ ◴[] No.42179086[source]
> That's not what [1] says, though?

Documind is using https://api.openai.com/v1/chat/completions, check the docs at the end of the long API table [1]:

> * Chat Completions:

> Image inputs via the gpt-4o, gpt-4o-mini, chatgpt-4o-latest, or gpt-4-turbo models (or previously gpt-4-vision-preview) are not eligible for zero retention.

--

1: https://platform.openai.com/docs/models#how-we-use-your-data

replies(1): >>42188577 #
11. emmanueloga_ ◴[] No.42180258{3}[source]
Haven't seen Docling before, it looks great! Thanks for sharing.
12. groby_b ◴[] No.42188577{3}[source]
Thanks for pointing there!

It's still not used for training, though, and the retention period is 30 days. It's... a livable compromise for some (many) use cases.

I kind of get the abuse policy reason for image inputs. It makes sense for multi-turn conversations to require a 1h audio retention, too. I'm just incredibly puzzled why schemas for structured outputs aren't eligible for zero-retention.
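For context on what's being retained there: with structured outputs the schema travels inside the request body itself, alongside the prompt. A sketch of such a request body (the invoice schema is an example; the `response_format` shape follows OpenAI's published structured-outputs format):

```javascript
// A chat-completions request with a strict JSON Schema attached via
// response_format. The schema fields here are example choices.
const requestBody = {
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Extract the invoice fields." }],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "invoice",
      strict: true,
      schema: {
        type: "object",
        properties: {
          invoice_number: { type: "string" },
          total: { type: "number" },
        },
        required: ["invoice_number", "total"],
        additionalProperties: false,
      },
    },
  },
};

console.log(requestBody.response_format.type); // "json_schema"
```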

replies(1): >>42190973 #
13. emmanueloga_ ◴[] No.42190973{4}[source]
Gotcha, from what I could find online I think you are right. I was conflating data not under zero-retention-policy with data-for-training.
14. sidmo ◴[] No.42195512[source]
If you are looking for the latest/greatest in file processing, I'd recommend checking out vision language models. They generate embeddings of the images themselves (as a collection of patches) and you can see query matching displayed as a heatmap over the document. Picks up text that OCR misses. My company DataFog has an open-source demo if you want to try it out: https://github.com/DataFog/vlm-api
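The patch-matching idea reduces to scoring each patch embedding against the query embedding; those per-patch scores are what a heatmap overlay visualizes. A toy sketch with made-up 3-d vectors (real models emit much larger embeddings, and the linked demo's exact similarity measure may differ; cosine similarity is used here as a common choice):

```javascript
// Score image-patch embeddings against a query embedding; the scores
// are what a document heatmap would display. Vectors are toy examples.
function cosine(a, b) {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

const patchEmbeddings = [
  [1, 0, 0],     // patch 0: aligned with the query
  [0, 1, 0],     // patch 1: orthogonal (no match)
  [0.9, 0.1, 0], // patch 2: near match
];
const queryEmbedding = [1, 0, 0];

const heat = patchEmbeddings.map((p) => cosine(p, queryEmbedding));
console.log(heat.map((h) => h.toFixed(2))); // [ '1.00', '0.00', '0.99' ]
```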

If you're looking for an all-in-one solution, little plug for our new platform that does the above and also allows you to create custom 'patterns' that get picked up via semantic search. Uses open-source models by default, can deploy into your internal network. www.datafog.ai. In beta now and onboarding manually. Shoot me an email if you'd like to learn more!

15. dontdoxxme ◴[] No.42200734[source]
For the MIT license to make sense it needs a copyright notice, I don’t actually see one in the original license. It just says “The MIT license” but then the text below references the above copyright notice, which doesn’t exist.

I think both sides here can learn from this: copyright notices are technically not required, but when some text references them it is very useful. The original author should have added one. The user of the code could also have asked about the copyright. If this were to go to court, having the original license not make sense could create more questions than it should.

tl;dr: add a copyright line at the top of the file when you’re using the MIT license.

16. gmerc ◴[] No.42200920{3}[source]
You took their code, did a search and replace on the product name, and relicensed the code AGPL?

You're going to have to delete this thing and start over man.

replies(1): >>42203869 #
17. leojaygod ◴[] No.42203869{4}[source]
It appears that the MIT license was correctly included to apply to the zerox code used while the AGPL license applies to their own code. Isn’t this how it should be?