
Hey HN, we are Max, Kieran, and Aahel from Midship (https://midship.ai). Midship makes it easy to extract data from unstructured documents like PDFs and images.

Here’s a video showing it in action: https://www.loom.com/share/ae43b6abfcc24e5b82c87104339f2625?..., and a demo playground (no signup required!) to test it out: https://app.midship.ai/demo

We started five months ago, initially trying to build an AI natural-language workflow builder as a simpler alternative to Zapier or Make.com. However, most of our users seemed far more interested in the basic (and not very good) document extraction feature we had. Seeing people spend hours a day manually extracting data from PDFs inspired us to build what has become Midship!

The problem is that despite all our progress in software, huge amounts of business data still live in PDFs and images. Sure, you can OCR them, but getting clean, structured data out is still painful. Most existing tools just give you a blob of markdown, leaving you to figure out which parts matter and how they relate.

We've found that combining OCR with language models lets us do something more useful: extract specific fields and tables that users actually care about. The LLMs help correct OCR mistakes and understand context (like knowing that "Inv#" and "Invoice Number" mean the same thing).
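
To make that concrete, here's a minimal sketch of the idea (illustrative only, not our production pipeline; it assumes the openai Python SDK and a made-up invoice schema whose field descriptions enumerate the label variants):

    import json

    from openai import OpenAI  # assumed LLM client; any JSON-mode LLM works

    # Canonical fields; descriptions list label variants seen in the wild so
    # the model resolves "Inv#", "Invoice No.", etc. to the same key.
    SCHEMA = {
        "invoice_number": "Unique invoice id. Labels vary: 'Invoice Number', 'Inv#', 'Invoice No.'",
        "invoice_date": "Issue date, ISO 8601. Labels vary: 'Date', 'Invoice Date', 'Issued'",
        "total": "Grand total as a plain number, no currency symbol.",
    }

    def extract_fields(ocr_text: str) -> dict:
        client = OpenAI()
        resp = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            response_format={"type": "json_object"},  # force parseable JSON
            messages=[
                {"role": "system", "content": (
                    "Extract the fields below from the OCR text. The OCR may "
                    "contain errors; correct obvious ones. Return JSON with "
                    "exactly these keys, null where a field is absent.\n"
                    + json.dumps(SCHEMA, indent=2)
                )},
                {"role": "user", "content": ocr_text},
            ],
        )
        return json.loads(resp.choices[0].message.content)

Calling extract_fields on a page of OCR text returns something like {"invoice_number": "INV-0042", ...}, regardless of which label variant appeared on the document.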

We have two main kinds of users today: non-technical users who extract data via our web app, and developers who use our extraction API. We initially focused on the former, since they seemed like an underserved part of the market, but we've received a lot of interest from developers who face the same issues.

For pricing, we currently charge a monthly SaaS fee per seat for the web app and volume-based pricing for the API.

We’re really excited to share what we’ve built so far and look forward to any feedback from the community!

hubraumhugo:
Congrats on the launch! A quick search in the YC startup directory brought up 5-10 companies doing pretty much the same thing:

- https://www.ycombinator.com/companies/tableflow

- https://www.ycombinator.com/companies/reducto

- https://www.ycombinator.com/companies/mindee

- https://www.ycombinator.com/companies/omniai

- https://www.ycombinator.com/companies/trellis

At the same time, accurate document extraction is becoming a commodity with powerful VLMs. Are you planning to focus on a specific industry, or how do you plan to differentiate?

tlofreso:
"accurate document extraction is becoming a commodity with powerful VLMs"

Agree.

The capability is fairly trivial for orgs with decent technical talent. The tech / processes all look similar:

User uploads file --> Azure prebuilt-layout returns .MD --> prompt + .MD + schema sent to LLM --> JSON returned. Do whatever you want with it.
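
In Python that pipeline is roughly the following (a sketch assuming the azure-ai-documentintelligence and openai SDKs; keyword names drift a bit between SDK versions):

    import json

    from azure.ai.documentintelligence import DocumentIntelligenceClient
    from azure.core.credentials import AzureKeyCredential
    from openai import OpenAI

    def pdf_to_json(path: str, schema: dict, endpoint: str, key: str) -> dict:
        # Step 1: layout OCR. prebuilt-layout can emit the document as
        # markdown (parameter names vary slightly across SDK versions).
        di = DocumentIntelligenceClient(endpoint, AzureKeyCredential(key))
        with open(path, "rb") as f:
            poller = di.begin_analyze_document(
                "prebuilt-layout",
                f,
                content_type="application/octet-stream",
                output_content_format="markdown",
            )
        markdown = poller.result().content

        # Step 2: prompt + markdown + schema -> LLM -> JSON.
        llm = OpenAI()
        resp = llm.chat.completions.create(
            model="gpt-4o",  # illustrative
            response_format={"type": "json_object"},
            messages=[
                {"role": "system", "content": (
                    "Fill this JSON schema from the document below. "
                    "Use null for anything missing.\n" + json.dumps(schema)
                )},
                {"role": "user", "content": markdown},
            ],
        )
        return json.loads(resp.choices[0].message.content)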

Kiro:
Why all those steps? Why not just file + prompt to JSON directly?
tlofreso:
Having the text (for now) is still pretty important for quality output. The vision models are quite good, but not a replacement for a quality OCR step. A combination of Text + Vision is compelling too.
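
A sketch of that combination, assuming the openai SDK and a vision-capable model (function and model names are illustrative): send the OCR text and the page image in the same message, so the model can lean on the text for exact characters and the image for layout.

    import base64
    import json

    from openai import OpenAI

    def extract_text_plus_vision(image_path: str, ocr_text: str,
                                 schema: dict) -> dict:
        # Encode the page image for an inline data URL.
        with open(image_path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode()

        client = OpenAI()
        resp = client.chat.completions.create(
            model="gpt-4o",  # any vision-capable model
            response_format={"type": "json_object"},
            messages=[{
                "role": "user",
                "content": [
                    # Text part: OCR output plus the target schema.
                    {"type": "text", "text": (
                        "Extract this schema as JSON. Trust the OCR text for "
                        "exact characters and the image for layout and table "
                        "structure.\n" + json.dumps(schema)
                        + "\n\nOCR TEXT:\n" + ocr_text
                    )},
                    # Image part: the rendered page itself.
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }],
        )
        return json.loads(resp.choices[0].message.content)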