
311 points | joshdickson | 1 comment

Hi HN!

Today I’m excited to launch OpenNutrition: a free, ODbL-licensed nutrition database of everyday generic, branded, and restaurant foods; a search engine that can browse the web to import new foods; and a companion app that bundles the database and search into a free macro-tracking app.

Consistently logging the foods you eat has been shown to support long-term health outcomes (1)(2), but doing so easily depends on having a large, accurate, and up-to-date nutrition database. Free, public databases are often out-of-date, hard to navigate, and missing critical coverage (like branded restaurant foods). User-generated databases can be unreliable or closed-source. Commercial databases come with ongoing, often per-seat licensing costs and usage restrictions that limit innovation.

As an amateur powerlifter and long-term weight-loss maintainer, I care deeply about helping others pursue their health goals. After exiting my previous startup last year, I wanted to investigate whether LLMs could be used to create the database and infrastructure required for a great food-logging app, cost-engineered for free and accessible distribution, because I believe the availability of these tools is a public good. That led to the dataset I’m releasing today; nutritional data is public record, and its organization and dissemination should be, too.

What’s in the database?

- 5,287 common everyday foods, 3,836 prepared and generic restaurant foods, and 4,182 distinct menu items from ~50 popular US restaurant chains; foods have standardized naming, consistent numeric serving sizes, estimated micronutrient profiles, descriptions, and citations/groundings to USDA, AUSNUT, FRIDA, CNF, etc., when possible.

- 313,442 of the most popular US branded grocery products with standardized naming, parsed serving sizes, and additive/allergen data, grounded in branded USDA data; the most popular 1% have estimated micronutrient data, with the goal of full coverage. (An illustrative sketch of a single record follows this list.)
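To make the structure above concrete, here is a minimal sketch of what a single entry might look like. The field names, values, and citation below are illustrative assumptions, not the dataset's actual schema:

```python
# Illustrative sketch only: field names, values, and the citation are
# assumptions about how an entry with standardized naming, a numeric serving
# size, estimated micronutrients, and source groundings might be organized.
example_food = {
    "name": "Greek Yogurt, Plain, Nonfat",
    "category": "dairy",
    "serving": {"amount": 170, "unit": "g", "description": "1 container"},
    "nutrients_per_serving": {
        "energy_kcal": 100,
        "protein_g": 17.0,
        "carbohydrate_g": 6.0,
        "fat_g": 0.4,
        "calcium_mg": 187,  # estimated micronutrient
    },
    "citations": ["USDA FoodData Central (hypothetical entry ID)"],
}
```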

Even the largest commercial databases can be frustrating to work with when searching for foods or customizations without existing coverage. To solve this, I created a real-time version of the approach used to build the core database: a search engine that can browse the web to learn about new foods or food customizations on demand (e.g., a highly customized Starbucks order). There is a limited demo on the web, and in-app you can log foods by text search, barcode scan, or image, all of which can search the web to import foods for you if needed. Foods discovered via these searches are fed back into the database, and I plan to publish updated versions as coverage expands.
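For readers curious how a flow like this might be wired together, here is a minimal, hypothetical sketch of a search-then-import loop in Python. The function names, signatures, and the idea of injecting the search and extraction steps as callables are my assumptions, not OpenNutrition's actual implementation:

```python
from typing import Callable, Optional

def import_food_from_web(
    query: str,
    db: dict,
    search: Callable[[str], list],                   # returns candidate page texts
    extract: Callable[[str, list], Optional[dict]],  # LLM turns pages into a record
) -> Optional[dict]:
    """Hypothetical web-import loop: search, extract, validate, then merge."""
    sources = search(query)
    record = extract(query, sources)
    if record is None or not looks_sane(record):
        return None                      # reject low-confidence or malformed output
    db[record["name"]] = record          # feed the new food back into the dataset
    return record

def looks_sane(record: dict) -> bool:
    """Minimal sanity check: required macro fields present and non-negative."""
    nutrients = record.get("nutrients_per_serving", {})
    required = ("energy_kcal", "protein_g", "carbohydrate_g", "fat_g")
    return all(
        isinstance(nutrients.get(k), (int, float)) and nutrients[k] >= 0
        for k in required
    )
```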

- Search & Explore: https://www.opennutrition.app/search

- Methodology/About: https://www.opennutrition.app/about

- Get the iOS App: https://apps.apple.com/us/app/opennutrition-macro-tracker/id...

- Download the dataset: https://www.opennutrition.app/download

OpenNutrition’s iOS app offers essential logging and a limited number of agentic searches for free, plus energy-expenditure tracking and ongoing diet recommendations comparable to best-in-class paid apps. A paid tier ($49/year) unlocks additional searches and features (data backup, prioritized micronutrient coverage for logged foods) and helps fund further development and broader library coverage.

I’d love to hear your feedback, questions, and suggestions—whether it’s about the database itself, a really great/bad search result, or the app.

1. Burke et al., 2011, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3268700/

2. Patel et al., 2019, https://mhealth.jmir.org/2019/2/e12209/

yamihere No.43570161
>> User-generated databases can be unreliable

>> Foods discovered via these searches are fed back into the database,

Aren’t LLMs also unreliable? How do you ensure the new content is from an authoritative, accurate source? How do you ensure the numbers that make it into the database are actually what the source provided?

According to the Methodology/About page

>> The LLM is tasked with creating complete nutritional values, explicitly explaining the rationale behind each value it generates. Outputs undergo rigorous validation steps,

Those rigorous validation steps were also created with LLMs, correct?

>> whose core innovations leveraged AI but didn’t explicitly market themselves as “AI products.”

Odd choice for an entirely AI-based service. First thought I had after reading that was: must be because people don’t trust AI-generated information. Seems disingenuous to minimize the AI aspect in marketing while this product only exists because of AI.

Great idea though, thanks for giving it a shot!

replies(2): >>43570320 #>>43570409 #
rob No.43570320
Not really sure how the author thinks anybody who tracks their calories/macros seriously is going to trust a website that literally just makes up values for the vitamins, minerals, etc.:

> TL;DR: They are estimates from giving an LLM (generally o3 mini high due to cost, some o1 preview) a large corpus of grounding data to reason over and asking it to use its general world knowledge to return estimates it was confident in, which, when escalating to better LLMs like o1-pro and manual verification, proved to be good enough that I thought they warranted release.
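For context on what a validation step over LLM-estimated values could look like (this is an assumption, not a documented part of OpenNutrition's pipeline), one common sanity check is Atwater-factor consistency: reported calories should roughly equal 4 kcal/g for protein and carbohydrate plus 9 kcal/g for fat.

```python
def atwater_consistent(protein_g: float, carb_g: float, fat_g: float,
                       energy_kcal: float, tolerance: float = 0.2) -> bool:
    """Flag estimates whose calories stray too far from 4p + 4c + 9f."""
    expected_kcal = 4 * protein_g + 4 * carb_g + 9 * fat_g
    return abs(expected_kcal - energy_kcal) <= tolerance * max(expected_kcal, 1.0)

# Example: 17 g protein, 6 g carbs, 0.4 g fat gives ~96 kcal, so a reported
# value of 100 kcal passes, while 250 kcal would be flagged for review.
print(atwater_consistent(17, 6, 0.4, 100))  # True
print(atwater_consistent(17, 6, 0.4, 250))  # False
```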

replies(3): >>43570711 #>>43570741 #>>43571180 #
joshdickson No.43570741
I have tracked my macro intake seriously for years and use the database every day, as do many folks who used the initial app releases. It's actually more valuable to me to have the data in this format, even estimated, because with other apps you get gaps in nutrient reporting on things like omega-3s, and you wonder, 'Am I not eating any omega-3s, or does the database entry for the food I ate just not include them?' In that case I'd much rather have an LLM with access to as much relevant data as I could feed it reason through the approximate nutrient distribution and give me the best estimate it could.

Appreciate the feedback!