
Building Effective AI Agents

(www.anthropic.com)
543 points by Anon84 | 1 comment
AvAn12 ◴[] No.44303213[source]
How do agents deal with task queueing, race conditions, and other issues arising from concurrency? I see lots of cool articles about building workflows of multiple agents - plus what feels like hand-waving around declaring an orchestrator agent to oversee the whole thing. And my mind goes to whether there needs to be some serious design considerations and clever glue code. Or does it all work automagically?
replies(7): >>44303413 #>>44303510 #>>44303611 #>>44303637 #>>44303642 #>>44304027 #>>44304092 #
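For concreteness, the glue code the question gestures at often reduces to a plain work queue feeding worker agents, where the queue itself serializes task handoff. A minimal sketch, assuming each "agent" is just an async function (all names here are hypothetical, not from any agent framework):

```python
import asyncio

async def worker_agent(name: str, queue: asyncio.Queue, results: list) -> None:
    # Each worker pulls tasks off a shared queue; the queue
    # serializes task assignment, so no two workers race on one task.
    while True:
        task = await queue.get()
        if task is None:  # sentinel: shut this worker down
            queue.task_done()
            break
        results.append(f"{name} handled {task}")
        queue.task_done()

async def orchestrate(tasks: list, n_workers: int = 2) -> list:
    queue: asyncio.Queue = asyncio.Queue()
    results: list = []
    workers = [
        asyncio.create_task(worker_agent(f"agent{i}", queue, results))
        for i in range(n_workers)
    ]
    for t in tasks:
        queue.put_nowait(t)
    for _ in workers:
        queue.put_nowait(None)  # one shutdown sentinel per worker
    await queue.join()          # wait until every queued item is processed
    for w in workers:
        await w
    return results

results = asyncio.run(orchestrate(["summarize", "search", "plan"]))
print(sorted(results))
```

The "orchestrator" here is nothing magical: it is the queue plus sentinel-based shutdown, which is exactly the kind of design consideration the comment suspects is being hand-waved away.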
1. rdedev ◴[] No.44304092[source]
This is why I am leaning towards having the LLM generate code that operates on tool calls instead of having everything in JSON.

Hugging Face's smolagents library makes the LLM generate Python code where tools are just normal Python functions. If you want parallel tool calls, just prompt the LLM to do so; it should take care of synchronizing everything. Of course, there is the whole issue of executing LLM-generated code, but we have a few solutions for that.
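The point is that once tools are plain functions, parallelism is just ordinary code rather than a JSON schema feature. A sketch of what such model-generated Python might look like (the two tool functions are hypothetical stand-ins, not part of smolagents itself):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical "tools" -- in a code-generating agent these would be
# plain Python functions exposed to the model.
def search_web(query: str) -> str:
    return f"results for {query!r}"

def get_weather(city: str) -> str:
    return f"sunny in {city}"

# The kind of code the LLM could emit when asked for parallel tool
# calls: it submits both calls to a thread pool and joins the results,
# with no tool-call JSON or orchestrator glue needed.
with ThreadPoolExecutor() as pool:
    search_future = pool.submit(search_web, "agent orchestration")
    weather_future = pool.submit(get_weather, "Paris")
    results = [search_future.result(), weather_future.result()]

print(results)
```

Because `Future.result()` blocks until each call finishes, the synchronization the parent comment asks about falls out of standard library semantics rather than framework magic.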