
469 points samuelstros | 1 comment
ahmedhawas123 ◴[] No.44999186[source]
Thanks for sharing this. At a time when there is a rush toward multi-agent systems, it's helpful to see how an LLM-first organization is going after it. Lots of the design aspects here are things I experiment with day to day, so it's good to see others using them as well.

A few takeaways for me from this: (1) Long prompts are good, and don't forget basic things like explaining in the prompt what the tool is, how to help the user, etc. (2) Tool calling is basic af; you need more context (when to use it, when not to use it, etc.). (3) Using messages as the memory/state of the system is OK; I've thought about fancier approaches (e.g., persisting dataframes, passing variables between steps), but it seems like as context windows grow, plain messages should be fine. A rough sketch of what I mean by (2) and (3) is below.
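To make (2) and (3) concrete, here is a minimal sketch using the Anthropic Python SDK. The `search_docs` tool, the system prompt, and the model id are made up for illustration; the point is the verbose tool description (when to use it, when not to, what it returns) and the fact that the `messages` list is the only state the agent keeps.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

TOOLS = [
    {
        "name": "search_docs",  # hypothetical tool, just for illustration
        "description": (
            "Search the internal documentation index. "
            "Use this when the user asks about product behaviour that is not already in the conversation. "
            "Do NOT use it for general-knowledge questions or when the answer is already in context. "
            "Returns the top 3 matching passages as plain text."
        ),
        "input_schema": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Plain-language search query"},
            },
            "required": ["query"],
        },
    },
]

def search_docs(query: str) -> str:
    # Placeholder implementation; wire up a real search index here.
    return f"(no results for {query!r} in this sketch)"

def run_agent(user_input: str) -> str:
    # The message list *is* the agent's memory: every assistant turn and
    # every tool result gets appended here, and nothing else is persisted.
    messages = [{"role": "user", "content": user_input}]

    while True:
        response = client.messages.create(
            model="claude-sonnet-4-20250514",  # example model id
            max_tokens=1024,
            system="You are a support assistant for ExampleApp. Explain what you can do and help the user step by step.",
            tools=TOOLS,
            messages=messages,
        )
        # Append the assistant turn verbatim so the model sees its own tool calls next round.
        messages.append({"role": "assistant", "content": response.content})

        if response.stop_reason != "tool_use":
            # Final answer: concatenate the text blocks.
            return "".join(b.text for b in response.content if b.type == "text")

        # Execute each requested tool and feed the results back as a user turn.
        tool_results = []
        for block in response.content:
            if block.type == "tool_use" and block.name == "search_docs":
                tool_results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": search_docs(**block.input),
                })
        messages.append({"role": "user", "content": tool_results})
```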

replies(2): >>44999771 #>>45001126 #
1. chazeon ◴[] No.45001126[source]
I want to note that long prompts are good only if the model is optimized for them. I have tried swapping the underlying model for Claude Code. Most local models, even those that claim to handle long context and tool use, don't work well once the instructions get long. This becomes an issue for tool use: it works fine in small chatbot-style conversation demos, but at Claude Code's prompt lengths the models just fail, either forgetting what tools are there, forgetting to use them, or returning results in the wrong format. Only OpenAI's models and Google's Gemini kind of work, and not as well as Anthropic's own models. Besides, they feel much slower.
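If you want to see where a local model falls over, one quick check is to pad the system prompt toward Claude-Code-like lengths and watch whether it still emits a well-formed tool call. A rough sketch against a local OpenAI-compatible endpoint; the localhost URL, model name, and `get_weather` tool are placeholders:

```python
from openai import OpenAI

# Point the OpenAI SDK at a local OpenAI-compatible server
# (URL and model name are placeholders; adjust for your local setup).
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # dummy tool, only here to test calling behaviour
        "description": "Get the current weather for a city. Use it whenever the user asks about weather.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Grow the system prompt and see at what size tool calling breaks down.
for pad_repeats in (0, 2_000, 8_000, 32_000):
    system = "You are a coding assistant with access to tools.\n" + ("Lorem ipsum. " * pad_repeats)
    resp = client.chat.completions.create(
        model="qwen2.5:14b",  # placeholder local model name
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": "What's the weather in Berlin right now?"},
        ],
        tools=TOOLS,
    )
    msg = resp.choices[0].message
    called = bool(msg.tool_calls)
    print(f"padding ~{2 * pad_repeats} words: tool call emitted = {called}")
    if called:
        # Check the arguments are valid JSON with the expected shape.
        print("  arguments:", msg.tool_calls[0].function.arguments)
```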