
223 points | edunteman | 1 comment

Hi HN! Erik here from Pig.dev, and today I'd like to share a new project we've just open sourced:

Muscle Mem is an SDK that records your agent's tool-calling patterns as it solves tasks, and will deterministically replay those learned trajectories whenever the task is encountered again, falling back to agent mode if edge cases are detected. Like a JIT compiler, for behaviors.

At Pig, we built computer-use agents for automating legacy Windows applications (healthcare, lending, manufacturing, etc.).

A recurring theme we ran into was that businesses already had RPA (pure-software scripts), and it worked for them in most cases. The pull toward agents as an RPA alternative was not to get infinitely flexible "AI employees," as tech Twitter/X might have you believe, but simply that their RPA breaks on occasional edge cases and agents can gracefully handle those cases.

Using a pure-agent approach proved to be highly wasteful. Windows' accessibility APIs are poor, so you're generally stuck with pure-vision agents, which can run around $40/hr in token costs and take 5x longer than a human to perform a workflow. At that point, you're better off hiring a human.
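(Back-of-the-envelope: a workflow a human finishes in an hour becomes roughly five agent-hours at ~$40/hr in tokens, so on the order of $200 in inference per run.)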

The goal of Muscle Mem is to get LLMs out of the hot path of repetitive automations, intelligently swapping between script-based execution for repeat cases and agent-based execution for discovery and self-healing. A rough sketch of that loop is below.
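Here's a minimal, hypothetical sketch of the record/replay/fallback idea in Python. The names (MuscleMemLikeEngine, Step, env_matches, etc.) are illustrative only, not the actual Muscle Mem API; see the repo and blog for the real interface.

```python
# Hypothetical sketch of the record/replay/fallback loop described above.
from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class Step:
    tool: str         # which tool the agent called
    args: dict        # the arguments it used
    fingerprint: Any  # snapshot of the environment taken just before the call


@dataclass
class Trajectory:
    task: str
    steps: list[Step] = field(default_factory=list)


class MuscleMemLikeEngine:
    def __init__(
        self,
        agent: Callable[[str], Trajectory],    # slow path: LLM-driven agent
        tools: dict[str, Callable[..., Any]],  # deterministic tool registry
        env_matches: Callable[[Any], bool],    # does the live env still match a fingerprint?
    ):
        self.agent = agent
        self.tools = tools
        self.env_matches = env_matches
        self.cache: dict[str, Trajectory] = {}

    def run(self, task: str) -> None:
        cached = self.cache.get(task)
        if cached is not None:
            # Fast path: replay the learned trajectory step by step,
            # validating the environment before each tool call.
            for step in cached.steps:
                if not self.env_matches(step.fingerprint):
                    break  # edge case detected: drop to the agent below
                self.tools[step.tool](**step.args)
            else:
                return  # full deterministic replay, no LLM in the loop
        # Slow path: let the agent solve the task and record its tool calls.
        self.cache[task] = self.agent(task)
```

The hard part, as the discussion below gets into, is the env_matches check: deciding whether the environment is still "the same" before trusting a replayed step.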

While inspired by computer-use environments, Muscle Mem is designed to generalize to any automation performing discrete tasks in dynamic environments. It took a great deal of thought to figure out an API that generalizes, which I cover more deeply in this blog: https://erikdunteman.com/blog/muscle-mem/

Check out the repo, consider giving it a star, or dive deeper into the above blog. I look forward to your feedback!

deepdarkforest:
Not sure if this can work. We played around with something similar for computer use, but comparing embeddings to cache-validate the starting position is a gray area with no clear threshold. For example, the datetime in the bottom right changes, or if it's an app backed by a database, the contents can shift the embeddings arbitrarily. You also have to do this at every step, because, as you said, things might break at any point. I just don't see how you can reliably validate. If anything, if models are cheap, you could use another, cheaper LLM call to compare screenshots, or adjust the Playwright/API script on the fly. We ended up with a quite different approach that worked surprisingly well.
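For concreteness, a minimal sketch of the embedding-based cache validation being critiqued here (illustrative only, with a made-up threshold; not either project's actual code):

```python
# Illustrative embedding-based cache validation. The threshold is arbitrary,
# which is exactly the objection: a clock tick in the taskbar and a genuinely
# different app state can both land near the same similarity score.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def should_replay(cached: np.ndarray, current: np.ndarray, threshold: float = 0.95) -> bool:
    """Replay the cached script only if the current screenshot's embedding is
    'close enough' to the one recorded when the trajectory was learned."""
    return cosine_similarity(cached, current) >= threshold
```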

There are definitely a lot of potential solutions; I'm curious where this goes. IMO an embeddings approach won't be enough. I'm more than happy to discuss what we did internally to achieve a decent rate, though; the space is super promising for sure.

arathis:
Hey, working on a personal project. Would love to dig into how you approached this.