
43 points | by Aherontas | 1 comment

Hey all! I recently gave a workshop talk at PyCon Greece 2025 about building production-ready agent systems.

To accompany the workshop, I put together a demo repo: https://github.com/Aherontas/Pycon_Greece_2025_Presentation_... (I'll also add the slides to my blog soon: https://www.petrostechchronicles.com/)

The idea was to show how multiple AI agents can collaborate using FastAPI + Pydantic-AI, with protocols like MCP (Model Context Protocol) and A2A (Agent-to-Agent) for safe communication and orchestration.
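To make the FastAPI + Pydantic-AI pairing concrete, here's a minimal sketch of a single agent exposed as a service. This is illustrative, not the repo's actual code: the model string, prompt, and endpoint path are placeholders, and the result attribute name differs between Pydantic-AI versions.

```python
# agent_service.py - illustrative sketch of one agent behind a FastAPI endpoint
from fastapi import FastAPI
from pydantic import BaseModel
from pydantic_ai import Agent

app = FastAPI()

# One Pydantic-AI agent; model name and system prompt are placeholders.
trend_agent = Agent(
    "openai:gpt-4o",
    system_prompt="Summarise the latest developments for a given tech topic.",
)

class Query(BaseModel):
    topic: str

@app.post("/analyze")
async def analyze(query: Query) -> dict:
    result = await trend_agent.run(f"What's new in {query.topic}?")
    # Recent Pydantic-AI releases expose result.output; older ones used result.data.
    return {"answer": result.output}
```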

Features:

- Multiple agents running in containers

- MCP servers (Brave search, GitHub, filesystem, etc.) as tools

- A2A communication between services (see the sketch after this list)

- Minimal UI for experimenting with tech-trend and repo analysis
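For the A2A item above, the simplest mental model is one agent service calling another over HTTP and treating the reply like any other tool result. A hypothetical sketch (service names, ports, and paths are invented for illustration, and this shows plain HTTP fan-out, not the full A2A spec with agent cards and tasks):

```python
# orchestrator.py - hypothetical sketch of agent-to-agent calls over HTTP
import httpx

# Each agent runs in its own container; these URLs are placeholders.
NEWS_AGENT_URL = "http://news-agent:8001/analyze"
REPO_AGENT_URL = "http://repo-agent:8002/analyze"

async def gather_tech_trends(topic: str) -> dict:
    """Fan a user question out to two specialised agents and merge their answers."""
    async with httpx.AsyncClient(timeout=60) as client:
        news = await client.post(NEWS_AGENT_URL, json={"topic": topic})
        repos = await client.post(REPO_AGENT_URL, json={"topic": topic})
    return {
        "news": news.json()["answer"],
        "repos": repos.json()["answer"],
    }
```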

I built this repo because most agent frameworks look great in isolated demos, but fall apart when you try to glue agents together into a real application. My goal was to help people experiment with these patterns and move closer to real-world use cases.

It’s not production-grade, but I’d love feedback, criticism, or war stories from anyone who’s tried building actual multi-agent systems. Big questions:

Do you think agent protocols like MCP and A2A will stick?

Or will the future be mostly single powerful LLMs with plugin stacks?

Thanks — excited to hear what the HN crowd thinks!

vivzkestrel:
Without using all the jargon, can you kindly explain what your project actually does? I did read the repo and still don't get it.
Aherontas:
It builds a ChatGPT-like UI backed by agents that either fetch the latest news on a specific tech topic (via MCP servers for Brave search, Hacker News, and GitHub, for trending repos or repos mentioned in the response) or answer general-knowledge questions. The whole purpose of the repo is to be a general example of how agents talk to other agents, how to pair them with MCP servers, and why MCP servers are needed at all (for example, it's better to run calculations in code rather than in the LLM, which hallucinates, and then hand the result back to the LLM). I hope that clarifies what it does.
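To illustrate the "do calculations in code, not in the LLM" point: with Pydantic-AI you can register a plain Python function as a tool, so the model delegates the arithmetic instead of guessing it. A rough sketch (assuming a recent Pydantic-AI release; the agent and tool here are illustrative, not taken from the repo):

```python
from pydantic_ai import Agent

calc_agent = Agent(
    "openai:gpt-4o",
    system_prompt="Use the multiply tool for any arithmetic; never compute it yourself.",
)

@calc_agent.tool_plain
def multiply(a: float, b: float) -> float:
    """Multiply two numbers exactly, in code, instead of letting the LLM guess."""
    return a * b

result = calc_agent.run_sync("What is 1234.5 * 6789?")
print(result.output)  # .output on recent versions; older releases used .data
```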