
223 points by edunteman | 1 comment

Hi HN! Erik here from Pig.dev, and today I'd like to share a new project we've just open-sourced:

Muscle Mem is an SDK that records your agent's tool-calling patterns as it solves tasks, then deterministically replays those learned trajectories whenever the same task comes up again, falling back to agent mode if edge cases are detected. Like a JIT compiler, for behaviors.
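A minimal sketch of what that might look like in use; the Engine, @engine.tool, and set_agent names here are illustrative guesses at the API shape, not documentation (see the repo for the real surface):

```python
# Illustrative sketch only: Engine, @engine.tool, and set_agent are
# assumed names, not necessarily the published Muscle Mem API.
from muscle_mem import Engine  # assumed import path

engine = Engine()

@engine.tool  # calls through this decorator are recorded as trajectory steps
def click(x: int, y: int) -> None:
    print(f"click at ({x}, {y})")  # stand-in for real UI automation

def agent(task: str) -> None:
    # In reality an LLM loop decides which tools to call; hard-coded here.
    click(120, 80)
    click(400, 300)

engine.set_agent(agent)

engine("export monthly invoices")  # cache miss: agent runs, steps are recorded
engine("export monthly invoices")  # cache hit: recorded steps replay, no LLM
```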

At Pig, we built computer-use agents for automating legacy Windows applications (healthcare, lending, manufacturing, etc).

A recurring theme we ran into was that businesses already had RPA (pure-software scripts), and it worked for them in most cases. The pull toward agents as an RPA alternative was not the promise of infinitely flexible "AI Employees," as tech Twitter/X might have you believe, but simply that their RPA breaks on occasional edge cases, and agents can handle those cases gracefully.

Using a pure-agent approach proved to be highly wasteful. Windows' accessibility APIs are poor, so you're generally stuck using pure-vision agents, which can run around $40/hr in token costs and take 5x longer than a human to perform a workflow. At that point, you're better off hiring a human.

The goal of Muscle Mem is to get LLMs out of the hot path of repetitive automations, intelligently swapping between script-based execution for repeat cases and agent-based execution for discovery and self-healing.
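The "edge cases are detected" part above amounts to cache validation: before replaying a recorded step, capture a cheap snapshot of the environment and compare it against the snapshot taken at record time. A minimal sketch of that idea, with an invented Check type (the real SDK's validation primitives may differ):

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

T = TypeVar("T")

@dataclass
class Check(Generic[T]):
    capture: Callable[[], T]         # snapshot the current environment
    compare: Callable[[T, T], bool]  # does it still match the recording?

def current_window_title() -> str:
    return "Invoice Entry - LegacyApp"  # stand-in for a real platform call

# Replay is only trusted if the same window is focused as at record time.
title_check = Check(
    capture=current_window_title,
    compare=lambda recorded, now: recorded == now,
)

def run_step(replay_step: Callable[[], None], recorded_title: str) -> bool:
    """Replay one cached step if its check passes; signal a cache miss otherwise."""
    if title_check.compare(recorded_title, title_check.capture()):
        replay_step()  # deterministic script path, no LLM in the loop
        return True
    return False       # caller falls back to agent mode for the whole task
```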

While inspired by computer-use environments, Muscle Mem is designed to generalize to any automation performing discrete tasks in dynamic environments. It took a great deal of thought to figure out an API that generalizes, which I cover in more depth in this blog post: https://erikdunteman.com/blog/muscle-mem/

Check out the repo, consider giving it a star, or dive deeper into the above blog. I look forward to your feedback!

mindwok No.43991197
It's becoming increasingly clear that memory and context are the bottlenecks in advancing AI usage. I can't help but feel there needs to be a general solution for this, perhaps even one built into the model - everyone seems to be building something on top that is roughly the same thing.
hnuser123456 No.43997414
Fine-tuning should be combined with inference in some way. However, this requires keeping the model loaded at high enough precision for backprop to work.

Instead of hundreds of thousands of us downloading the latest and greatest model that won't fundamentally update one bit until we're graced with the next one, I would think we should all be able to fine-tune the weights so that the model can naturally memorize new information and preferences without using up context length.
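Parameter-efficient fine-tuning already gets partway there: with LoRA you train small adapter matrices on top of a frozen base model, so "memorizing" new info doesn't mean re-downloading or fully retraining anything. A rough sketch using Hugging Face's peft library (model name and hyperparameters are placeholders, not recommendations):

```python
# Rough sketch of lightweight local fine-tuning with LoRA adapters;
# model name and hyperparameters below are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")

# Train small low-rank adapter matrices instead of all weights: the base
# model stays frozen, so memory stays manageable on a single GPU.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of total weights

# ...train on the new facts/preferences, then persist just the adapter:
model.save_pretrained("my-local-memory-adapter")
```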