
133 points by bloppe | 3 comments

I've been working with the Featureform team on their new open-source project, [EnrichMCP][1], a Python ORM framework that helps AI agents understand and interact with your data in a structured, semantic way.

EnrichMCP is built on top of [MCP][2] and acts like an ORM, but for agents instead of humans. You define your data model using SQLAlchemy, APIs, or custom logic, and EnrichMCP turns it into a type-safe, introspectable interface that agents can discover, traverse, and invoke.

It auto-generates tools from your models, validates all I/O with Pydantic, handles relationships, and supports schema discovery. Agents can go from user → orders → product naturally, just like a developer navigating an ORM.
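The user → orders → product traversal can be sketched with plain Pydantic models. This is illustrative only: EnrichMCP's own decorators and tool generation are omitted, and it just shows the kind of typed, nested schema the framework builds on (see the repo docs for the real API):

```python
# Illustrative Pydantic models for the user -> orders -> product chain.
# These are NOT EnrichMCP's API, just the style of typed schema involved.
from pydantic import BaseModel


class Product(BaseModel):
    id: int
    name: str


class Order(BaseModel):
    id: int
    product: Product


class User(BaseModel):
    id: int
    email: str
    orders: list[Order]


user = User(
    id=1,
    email="a@example.com",
    orders=[Order(id=10, product=Product(id=7, name="Widget"))],
)

# Traversal reads like ORM navigation; Pydantic rejects ill-typed input,
# which is what makes auto-generated tools "type-safe".
product_name = user.orders[0].product.name
```

Because every field is typed, an agent can introspect the schema to learn that a `User` has `orders` and each order has a `product`, rather than guessing at a JSON blob's shape.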

We use this internally to let agents query production systems, call APIs, apply business logic, and even integrate ML models. It works out of the box with SQLAlchemy and is easy to extend to any data source.

If you're building agentic systems or anything AI-native, I'd love your feedback. Code and docs are here: https://github.com/featureform/enrichmcp. Happy to answer any questions.

[1]: https://github.com/featureform/enrichmcp

[2]: https://modelcontextprotocol.io/introduction

polskibus No.44321614
This looks very interesting, but I'm not sure how to use it well. Would you mind sharing some prompts that use it and solve a real problem that you encountered?
simba-k No.44321807
Imagine you're building a support agent for DoorDash. A user asks, "Why is my order an hour late?" Most teams today would build a RAG system that surfaces a help center article saying something like, "Here are common reasons orders might be delayed."

That doesn't actually solve the problem. What you really need is access to internal systems. The agent should be able to look up the order, check the courier status, pull the restaurant's delay history, and decide whether to issue a refund. None of that lives in documentation. It lives in your APIs and databases.

LLMs aren't limited by reasoning. They're limited by access.

EnrichMCP gives agents structured access to your real systems. You define your internal data model using Python, similar to how you'd define models in an ORM. EnrichMCP turns those definitions into typed, discoverable tools the LLM can use directly. Everything is schema-aware, validated with Pydantic, and connected by a semantic layer that describes what each piece of data actually means.

You can integrate with SQLAlchemy, REST APIs, or custom logic. Once defined, your agent can use tools like get_order, get_restaurant, or escalate_if_late with no additional prompt engineering.

It feels less like stitching prompts together and more like giving your agent a real interface to your business.
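A rough sketch of what one such tool could look like under the hood. Only the names `get_order` and `escalate_if_late` come from the description above; the 30-minute threshold, the in-memory "database", and the return values are all invented for illustration:

```python
# Hypothetical stand-ins for the typed tools described above.
# The data layer and the lateness policy are made up for this sketch.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional


@dataclass
class Order:
    id: int
    promised_at: datetime
    delivered_at: Optional[datetime]  # None while still in transit


# Stand-in for a real lookup against a production database or API.
_ORDERS = {
    42: Order(id=42, promised_at=datetime(2025, 1, 1, 12, 0), delivered_at=None),
}


def get_order(order_id: int) -> Order:
    return _ORDERS[order_id]


def escalate_if_late(order_id: int, now: datetime) -> str:
    """Issue a refund when an undelivered order is >30 min past its promise."""
    order = get_order(order_id)
    if order.delivered_at is None and now - order.promised_at > timedelta(minutes=30):
        return "refund_issued"
    return "no_action"
```

The point is that the agent calls a typed interface with business logic behind it, rather than reasoning over a help-center article.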

TZubiri No.44323084
Cool. Can you give the agent a db user with restricted read permissions?

Also, a generic db question: can you protect against resource overconsumption? Like, if a junior dev or an agent writes a query with 100 joins, can a marshal kill the process and time it out?

simba-k No.44323448
Yes to restricted read. There's still a lot of API work to do here, and we're a bit blocked by MCP itself changing its auth spec (it was just republished yesterday).

If you use the lower-level EnrichMCP API (without SQLAlchemy), you fully control all retrieval logic and can add things like rate limiting, not dissimilar to how you'd solve this with a traditional API.
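For the rate-limiting piece, a wrapper around a resolver can be as simple as a token bucket. This is a generic sketch, not an EnrichMCP class:

```python
# Minimal token-bucket rate limiter of the kind you could put in front of
# any retrieval function (illustrative; not part of EnrichMCP).
import time


class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens replenished per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; refuse the call otherwise."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A resolver would check `bucket.allow()` before hitting the database and return an error (or back off) when it refuses.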

TZubiri No.44329951
You could do this outside the MCP protocol, just by making a SQL user account with restricted privileges. I'm assuming at some point you have to give the MCP ORM credentials. I think it's easier and more maintainable to add a doc page tutorial showing how to do it, instead of making it part of the dependency. It also reduces the scope of the library.
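As a concrete illustration of both controls discussed in this subthread (a read-only account plus a watchdog that kills runaway queries), SQLite happens to expose connection-level hooks that do the same job in-process; for Postgres/MySQL you'd use a restricted user plus a statement timeout instead. Everything below is illustrative and unrelated to EnrichMCP's own API:

```python
# Sketch: make a SQLite connection read-only and give it a time budget,
# using the stdlib authorizer and progress-handler hooks.
import sqlite3
import time


def lock_down(conn: sqlite3.Connection, budget_s: float) -> None:
    """Deny writes and abort statements once the time budget is spent."""

    def authorizer(action, arg1, arg2, dbname, source):
        # Allow reads and function calls; deny anything that mutates
        # data or schema (INSERT, UPDATE, DELETE, DDL, ...).
        if action in (sqlite3.SQLITE_SELECT, sqlite3.SQLITE_READ,
                      sqlite3.SQLITE_FUNCTION):
            return sqlite3.SQLITE_OK
        return sqlite3.SQLITE_DENY

    deadline = time.monotonic() + budget_s

    def watchdog():
        # A nonzero return value aborts the currently running statement.
        return 1 if time.monotonic() > deadline else 0

    conn.set_authorizer(authorizer)
    conn.set_progress_handler(watchdog, 10_000)  # check every ~10k VM ops
```

Granting the agent's connection only these capabilities is the in-process analogue of `GRANT SELECT` plus `statement_timeout` on a server database.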