olliem36:
We've built a multi-agent system designed to run complex tasks and workflows from a single prompt. Prompts are written by non-technical people and can be 10+ pages long...

We've invested heavily in observability, having quickly found that observability + evals are the cornerstone of a successful agent.

For example, a few things we measure:

1. Task complexity (assessed by another LLM)
2. Success metrics given the task(s) (again, by other LLMs)
3. Speed of agent runs & tools
4. Tool errors, including timeouts
5. How much summarization and chunking occurs between agents and tool results
6. Tokens used, cost
7. Reasoning and model selected by our dynamic routing
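A minimal sketch of the judge step behind items 1 and 2, assuming the OpenAI Python SDK; the model name, prompt wording, and 1-5 scales are placeholders, not the commenter's actual setup:

```python
import json
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are grading an agent run.
Task: {task}
Final output: {output}

Return JSON: {{"complexity": 1-5, "success": 1-5, "rationale": "..."}}"""

def judge_run(task: str, output: str) -> dict:
    """Ask a separate model to score the complexity and success of a run."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice of judge model
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(task=task, output=output)}],
        response_format={"type": "json_object"},  # force parseable output
    )
    return json.loads(resp.choices[0].message.content)
```

Using a cheaper model as the judge keeps the eval overhead small relative to the run being graded.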

Thank god it's been relatively cheap to build this in-house. Our metrics dashboard is essentially a vibe-coded React admin site, but it has proven absolutely invaluable!
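For a sense of what such a dashboard sits on top of, here is a hypothetical per-run record plus a tool-call timer covering items 3-6; the field names and wrapper are invented for illustration, not the actual schema:

```python
import time
from dataclasses import dataclass, field

@dataclass
class RunMetrics:
    run_id: str
    model: str                # chosen by dynamic routing (item 7)
    complexity: int = 0       # from the judge step above
    success: int = 0
    prompt_tokens: int = 0
    completion_tokens: int = 0
    cost_usd: float = 0.0
    tool_latencies_ms: dict[str, float] = field(default_factory=dict)
    tool_errors: dict[str, str] = field(default_factory=dict)  # incl. timeouts

def timed_tool_call(metrics: RunMetrics, name: str, fn, *args, **kwargs):
    """Run one tool call, recording its latency and any error."""
    start = time.monotonic()
    try:
        return fn(*args, **kwargs)
    except Exception as exc:          # a timeout would surface here
        metrics.tool_errors[name] = type(exc).__name__
        raise
    finally:
        metrics.tool_latencies_ms[name] = (time.monotonic() - start) * 1000
```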

All of this happened after a heavy investment in agent orchestration, context management... it's been quite a ride!
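The dynamic routing from item 7 can be as simple as mapping the judged complexity score to a model tier; the thresholds and model names below are invented for illustration:

```python
def route_model(complexity: int) -> str:
    """Map a 1-5 complexity score to a model tier."""
    if complexity <= 2:
        return "gpt-4o-mini"   # cheap tier for simple tasks
    if complexity <= 4:
        return "gpt-4o"        # mid tier
    return "o1"                # reasoning model for the hardest tasks
```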

amelius:
The problem with this approach is that evaluation is another AI task, which has its own problems ...

Chicken and egg.