
43 points | Aherontas | 1 comment

Hey all! I recently gave a workshop talk at PyCon Greece 2025 about building production-ready agent systems.

To accompany the workshop, I put together a demo repo (I'll also add the slides to my blog soon: https://www.petrostechchronicles.com/): https://github.com/Aherontas/Pycon_Greece_2025_Presentation_...

The idea was to show how multiple AI agents can collaborate using FastAPI + Pydantic-AI, with protocols like MCP (Model Context Protocol) and A2A (Agent-to-Agent) for safe communication and orchestration.

Features:

- Multiple agents running in containers

- MCP servers (Brave search, GitHub, filesystem, etc.) as tools

- A2A communication between services

- Minimal UI for experimenting with tech-trend / repo analysis
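To make the A2A idea concrete (agents exchanging structured messages rather than reaching into each other's internals), here's a toy in-process sketch. The names and message shape are mine, not the actual A2A spec, and a real deployment would route over HTTP between containers:

```python
from dataclasses import dataclass


@dataclass
class A2AMessage:
    """Toy envelope loosely inspired by agent-to-agent messaging (not the real A2A spec)."""
    sender: str
    recipient: str
    payload: dict


class EchoAgent:
    def __init__(self, name: str):
        self.name = name

    def handle(self, msg: A2AMessage) -> A2AMessage:
        # Reply with a new structured message instead of mutating shared state.
        return A2AMessage(sender=self.name, recipient=msg.sender,
                          payload={"echo": msg.payload})


class Router:
    """Minimal in-process router standing in for an HTTP-based A2A transport."""
    def __init__(self):
        self.agents = {}

    def register(self, agent: EchoAgent):
        self.agents[agent.name] = agent

    def send(self, msg: A2AMessage) -> A2AMessage:
        return self.agents[msg.recipient].handle(msg)


router = Router()
router.register(EchoAgent("trend-agent"))
reply = router.send(A2AMessage(sender="ui", recipient="trend-agent",
                               payload={"q": "fastapi"}))
```

The point of the envelope is that every hop is a typed, inspectable value, which is what makes orchestration and logging tractable once agents live in separate services.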

I built this repo because most agent frameworks look great in isolated demos, but fall apart when you try to glue agents together into a real application. My goal was to help people experiment with these patterns and move closer to real-world use cases.

It’s not production-grade, but I’d love feedback, criticism, or war stories from anyone who’s tried building actual multi-agent systems. Big questions:

Do you think agent-to-agent protocols like MCP/A2A will stick?

Or will the future be mostly single powerful LLMs with plugin stacks?

Thanks — excited to hear what the HN crowd thinks!

tcdent (No.45254599):
Since you're framing this as a learning resource, here are a couple things I see:

Your views are not following a single convention: some of them return dictionaries, some return base JSONResponse objects, and others return properly defined Pydantic schemas. I didn't run the code, but I'd venture to guess your generated documentation is not comprehensive, nor is it cohesive.

I'd also further extend this into your agent services; passing bare dictionaries with arbitrary fields into what is supposed to be a modular logic handler is pretty outdated. You're defining a functional (methods) interface; data structures are the other half of the equation.
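A sketch of the "data structures are the other half" point: give each inter-agent payload a Pydantic model so malformed input fails at the service boundary instead of deep inside a handler. The model and field names here are hypothetical:

```python
from pydantic import BaseModel, ValidationError


class RepoAnalysisRequest(BaseModel):
    """Typed contract for what one agent service hands another (illustrative fields)."""
    repo_url: str
    max_commits: int = 50


# A well-formed payload parses into a typed object with defaults filled in...
req = RepoAnalysisRequest(repo_url="https://github.com/example/x")

# ...while a malformed one (missing repo_url, non-int max_commits) fails loudly
# at the boundary rather than surfacing as a KeyError later.
bad_payload_rejected = False
try:
    RepoAnalysisRequest(max_commits="lots")
except ValidationError:
    bad_payload_rejected = True
```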

This plays into the way that Agents (as in the context of this system, versus Pydantic AI agents) are wrapped arbitrarily. I'd favor making the conversion from a Pydantic agent to a native agent part of the system's interface design, rather than re-implementing a subset of the agentic functions in your own BaseAgent and ending up with an `agent.agent` context.
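One way to express that interface-design suggestion is to define the system's own agent contract (e.g. as a `Protocol`) and adapt the framework agent to it once, at the edge, so callers never reach through an `agent.agent` attribute. All names here are mine, and the stand-in callable replaces a real Pydantic-AI agent to keep the sketch self-contained:

```python
from typing import Protocol


class SystemAgent(Protocol):
    """The interface the rest of the system depends on (name is illustrative)."""
    def run(self, prompt: str) -> str: ...


class PydanticAIAdapter:
    """Adapts a framework agent to SystemAgent at the boundary, instead of
    re-implementing agentic behavior in a custom BaseAgent."""
    def __init__(self, framework_agent):
        self._inner = framework_agent  # kept private; not part of the interface

    def run(self, prompt: str) -> str:
        # In a real system this would invoke the Pydantic-AI agent's run method;
        # here a plain callable stands in for it.
        return self._inner(prompt)


agent: SystemAgent = PydanticAIAdapter(lambda p: f"handled: {p}")
```

The conversion happens in exactly one place, so swapping the underlying framework later only touches the adapter.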

Also, since this is a web-centric application (that leverages agents) dropping all of your view functions into main.py leaves something to be desired; break up your views into logical modules based on role.

Everyone's learning, and I hope this helps someone in their journey. Kudos for putting your code out there as a resource; some of us can't help ourselves from reading it.

Aherontas (No.45260941):
Thanks a lot for taking the time to go through the code and leave such a detailed comment. I really appreciate thoughtful feedback, good or bad; it always helps.

Just to clarify: this repo is meant as a quick demo/learning resource after experimenting with Pydantic-AI (for a limited time), FastAPI, and agents. It’s not something I’d structure or ship that way in a production environment or at a real job.

That said, your points are spot on:

- Consistency in return types: Pydantic schemas over mixed dicts/JSONResponse. The mix crept in because I glued together code from other projects and generated snippets, so anyone using this in a real case should refactor it.

- Structuring data exchange between agents with typed models instead of raw dicts. (Totally correct too.)

- Avoiding redundant abstractions in the agent base. (I don't fully agree here; what counts as a redundant abstraction is something people can reasonably disagree about.)

- Breaking views into logical modules rather than dropping them all into main.py. (I fully agree again.)

These are all best practices I’d absolutely follow in production code, and more besides, since the codebase isn’t structured 100% robustly. It’s great to see them highlighted here so others reading along can learn from the contrast between “demo” and “real-world” implementations.

Again, thanks for diving in. This kind of feedback is exactly what makes sharing experiments valuable.