
43 points | by Aherontas | 1 comment

Hey all! I recently gave a workshop talk at PyCon Greece 2025 about building production-ready agent systems.

For the workshop, I put together a demo repo: https://github.com/Aherontas/Pycon_Greece_2025_Presentation_... (I'll also add the slides to my blog soon: https://www.petrostechchronicles.com/)

The idea was to show how multiple AI agents can collaborate using FastAPI + Pydantic-AI, with protocols like MCP (Model Context Protocol) and A2A (Agent-to-Agent) for safe communication and orchestration.
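To give a flavor of that pattern: a single agent service built on this stack might look roughly like the sketch below. This is not the repo's actual code; the model name, RepoReport schema, and /analyze route are illustrative placeholders.

    # Hypothetical sketch: one agent exposed as a FastAPI service.
    from fastapi import FastAPI
    from pydantic import BaseModel
    from pydantic_ai import Agent

    app = FastAPI()

    class RepoReport(BaseModel):
        """Structured output the agent must return."""
        summary: str
        trending_topics: list[str]

    # output_type makes pydantic-ai validate the LLM response against RepoReport
    # (older releases call this parameter result_type).
    report_agent = Agent(
        "openai:gpt-4o",
        output_type=RepoReport,
        system_prompt="Summarize a GitHub repository and extract trending topics.",
    )

    @app.post("/analyze")
    async def analyze(repo_url: str) -> RepoReport:
        result = await report_agent.run(f"Analyze this repository: {repo_url}")
        return result.output  # result.data on older pydantic-ai versions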

Features:

- Multiple agents running in containers

- MCP servers (Brave Search, GitHub, filesystem, etc.) as tools (see the sketch after this list)

- A2A communication between services

- Minimal UI for experimenting with tech-trend and repo analysis
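To make the MCP-as-tools bullet concrete, here's a rough sketch of plugging an MCP server into a Pydantic-AI agent. It assumes pydantic-ai's MCP client support as documented around this time (MCPServerStdio, mcp_servers, run_mcp_servers; newer releases expose the same idea as toolsets), and it uses the reference @modelcontextprotocol/server-filesystem package rather than the repo's actual configuration.

    # Hypothetical sketch: an MCP server (here the reference filesystem server)
    # exposed to a Pydantic-AI agent as a set of tools.
    import asyncio

    from pydantic_ai import Agent
    from pydantic_ai.mcp import MCPServerStdio

    # Launch the MCP server as a subprocess speaking stdio; its tools
    # (read_file, list_directory, ...) become callable by the agent.
    filesystem = MCPServerStdio(
        "npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/workspace"],
    )

    agent = Agent("openai:gpt-4o", mcp_servers=[filesystem])

    async def main() -> None:
        async with agent.run_mcp_servers():  # starts/stops the server subprocess
            result = await agent.run("List the Python files in the workspace.")
            print(result.output)

    if __name__ == "__main__":
        asyncio.run(main())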

I built this repo because most agent frameworks look great in isolated demos, but fall apart when you try to glue agents together into a real application. My goal was to help people experiment with these patterns and move closer to real-world use cases.

It’s not production-grade, but I’d love feedback, criticism, or war stories from anyone who’s tried building actual multi-agent systems. Big questions:

Do you think agent protocols like MCP and A2A will stick?

Or will the future be mostly single powerful LLMs with plugin stacks?

Thanks — excited to hear what the HN crowd thinks!

colonCapitalDee:
I've been very happy with pydantic-ai; it blows the rest of the Python AI ecosystem out of the water.
gHA5:
Are you using Pydantic AI for structured output? If so, have you also tried instructor?
colonCapitalDee:
I did look at instructor, and for structured output pydantic-ai and instructor are probably about the same, but pydantic-ai supports a ton of other stuff that isn't part of instructor's feature set. For me the killer features were the ability to serialize/deserialize conversations as JSON, frictionless tool calling, and the ability to mock the LLM client for testing.
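A minimal sketch of those three features, assuming a recent pydantic-ai (ModelMessagesTypeAdapter, tool_plain, and TestModel come from its docs, though exact names have moved between releases):

    from pydantic_ai import Agent
    from pydantic_ai.messages import ModelMessagesTypeAdapter
    from pydantic_ai.models.test import TestModel

    agent = Agent("openai:gpt-4o", system_prompt="Be terse.")

    # Frictionless tool calling: a plain function the model may invoke.
    @agent.tool_plain
    def word_count(text: str) -> int:
        """Count the words in a piece of text."""
        return len(text.split())

    # Serialize a conversation to JSON, then restore it and keep going.
    result = agent.run_sync("What is MCP?")
    as_json = ModelMessagesTypeAdapter.dump_json(result.all_messages())
    history = ModelMessagesTypeAdapter.validate_json(as_json)
    followup = agent.run_sync("And A2A?", message_history=history)

    # Mock the LLM client for testing: no real model call is made.
    with agent.override(model=TestModel()):
        test_result = agent.run_sync("ping")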