- It’s a chat-completions-compatible endpoint, making it easy to drop into existing agents with a custom base_url (see the sketch below)
- The cache is template-aware, meaning lookups can treat dynamic content (names, addresses, etc.) as variables
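For anyone wondering what the drop-in looks like, here's a minimal sketch using the OpenAI Python SDK; the base URL and key below are placeholders for illustration, not the real endpoint:

    # Sketch only: point an existing agent at the proxy by swapping base_url.
    # The URL below is a placeholder, not the actual endpoint.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://<butter-proxy-host>/v1",  # hypothetical address
        api_key="YOUR_API_KEY",
    )

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Fill out the intake form for this patient record"}],
    )
    print(resp.choices[0].message.content)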
You can see it in action in this demo where it memorizes tic-tac-toe games: https://www.youtube.com/watch?v=PWbyeZwPjuY
Why we built this: before Butter, we were Pig.dev (YC W25), where we built computer-use agents to automate legacy Windows applications. The goal was to replace RPA. But in practice, these agents were slow, expensive, and unpredictable - a major downgrade from deterministic RPA, and unacceptable in the worlds of healthcare, lending, and government. We realized users don't want to replace RPA with AI; they just want AI to handle the edge cases.
We set out to build a "muscle memory" system for AI automations (general purpose, not just computer use), where agent trajectories get baked into reusable code. You may recall our first iteration of this in May, a library called Muscle Mem: https://news.ycombinator.com/item?id=43988381
Today we're relaunching it as a chat completions proxy. It emulates scripted automations by storing observed message histories in a tree structure, where each fork in the tree represents some conditional branch in the workflow's "code". We replay behaviors by walking the agent down the tree, falling back to AI to add new branches if the next step is not yet known.
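In pseudocode, the core loop looks roughly like this - a simplified sketch with illustrative names, ignoring the template-aware matching of dynamic content:

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        children: dict = field(default_factory=dict)  # observed message -> next Node
        completion: str | None = None                  # cached assistant reply at this step

    class TrajectoryCache:
        def __init__(self):
            self.root = Node()

        def lookup(self, messages):
            """Walk the tree along the message history; return a cached reply or None."""
            node = self.root
            for msg in messages:
                key = (msg["role"], msg["content"])
                if key not in node.children:
                    return None               # unseen branch -> cache miss
                node = node.children[key]
            return node.completion

        def record(self, messages, completion):
            """Add the branch so the same history replays deterministically next time."""
            node = self.root
            for msg in messages:
                key = (msg["role"], msg["content"])
                node = node.children.setdefault(key, Node())
            node.completion = completion

    def complete(cache, messages, call_model):
        cached = cache.lookup(messages)
        if cached is not None:
            return cached                     # replay: no model call
        reply = call_model(messages)          # fall back to AI for the new branch
        cache.record(messages, reply)
        return reply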
The proxy is live and free to use while we work through making the template-aware engine more flexible and accurate. Please try it out and share how it went, where it breaks, and if it’s helpful.
It’s a good idea
Thanks for the nice words!
what are some use cases where you need deterministic caching?
Is this used only in cases where you assume the answer from your first call is correct?
It’s useful, but it has limitations: it seems to only work well in environments that are perfectly predictable; otherwise it gets in the way of the agent.
I think I prefer RL over these approaches but it requires a bit more data.
Wrote more on that here: https://blog.butter.dev/the-messy-world-of-deterministic-age...
Right now, we assume the first call is correct, and will eagerly take the first match we find while traversing the tree.
One of the worst things that could currently happen is that we cache a bad run, and then instead of occasional failures you get 100% failures.
A few approaches we’ve considered:

- Maintain a staging tree, and only promote to live if multiple sibling nodes (messages) look similar enough. The decision to promote could be via templating, regex, fuzzy matching, semantic similarity, or an LLM judge.

- Add feedback APIs for a client to score end-to-end runs, so that a path can develop some reputation.
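A rough sketch of the first idea, with simple fuzzy matching standing in for any of those checks (names and thresholds are illustrative, not what we ship):

    from difflib import SequenceMatcher

    def similar(a: str, b: str, threshold: float = 0.9) -> bool:
        # Fuzzy check; templating, regex, semantic, or LLM-judged checks would slot in here.
        return SequenceMatcher(None, a, b).ratio() >= threshold

    def should_promote(staged_observations: list[str], min_runs: int = 3) -> bool:
        """Promote a staged node to the live tree only if enough runs agree."""
        if len(staged_observations) < min_runs:
            return False
        first = staged_observations[0]
        return all(similar(first, other) for other in staged_observations[1:])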
In those cases, perhaps an open-source (maybe even local) version would make more sense. For our hosted version we’d need to charge something, given the storage requirements of running such a service, but that feels wrong, especially for local models. I’ve been considering open source for this reason.