223 points | edunteman | 3 comments

Hi HN! Erik here from Pig.dev, and today I'd like to share a new project we've just open sourced:

Muscle Mem is an SDK that records your agent's tool-calling patterns as it solves tasks, then deterministically replays those learned trajectories whenever the same task comes up again, falling back to agent mode if edge cases are detected. Like a JIT compiler, for behaviors.
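
A minimal sketch of the pattern (illustrative names only, not the actual Muscle Mem API): look up a cached trajectory for the task, replay it if the environment still looks right, otherwise fall back to the agent and record what it does.

    import hashlib
    from typing import Callable

    class TrajectoryCache:
        def __init__(self):
            # task fingerprint -> recorded list of (tool, kwargs) steps
            self.trajectories: dict[str, list] = {}

        def _key(self, task: str) -> str:
            return hashlib.sha256(task.encode()).hexdigest()

        def lookup(self, task: str):
            return self.trajectories.get(self._key(task))

        def record(self, task: str, steps: list):
            self.trajectories[self._key(task)] = steps

    def run(task: str, agent, cache: TrajectoryCache, env_ok: Callable[[], bool]):
        steps = cache.lookup(task)
        if steps is not None and env_ok():
            # Cache hit: replay the recorded tool calls deterministically,
            # with no LLM in the loop.
            for tool, kwargs in steps:
                tool(**kwargs)
            return
        # Cache miss or drifted environment: fall back to agent mode and
        # record the trace for next time (agent.solve is a stand-in here).
        cache.record(task, agent.solve(task))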

At Pig, we built computer-use agents for automating legacy Windows applications (healthcare, lending, manufacturing, etc.).

A recurring theme we ran into was that businesses already had RPA (pure-software scripts), and it worked for them in most cases. The pull toward agents as an RPA alternative was not to get the infinitely flexible "AI employees" that tech Twitter/X might have you expect, but simply that their RPA breaks on occasional edge cases, and agents can gracefully handle those cases.

Using a pure-agent approach proved highly wasteful. Windows' accessibility APIs are poor, so you're generally stuck with pure-vision agents, which can run around $40/hr in token costs and take 5x longer than a human to perform a workflow. At that point, you're better off hiring a human.

The goal of Muscle Mem is to get LLMs out of the hot path of repetitive automations, intelligently swapping between script-based execution for repeat cases and agent-based execution for discovery and self-healing.
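
The hard part is knowing when a cached trajectory is still safe to replay. One way to picture it (my shorthand here, not necessarily the SDK's exact mechanism) is a per-step check that captures some feature of the environment at record time and compares it at replay time, aborting to agent mode on mismatch:

    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class Check:
        capture: Callable[[], Any]           # e.g. hash of a screenshot region or window title
        compare: Callable[[Any, Any], bool]  # does the cached snapshot still hold?

    def replay(steps, on_drift: Callable[[], Any]):
        # steps: [(tool, kwargs, check, cached_snapshot), ...]
        for tool, kwargs, check, cached in steps:
            if not check.compare(cached, check.capture()):
                # Edge case detected: hand control back to the agent.
                return on_drift()
            tool(**kwargs)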

While inspired by computer-use environments, Muscle Mem is designed to generalize to any automation performing discrete tasks in dynamic environments. It took a great deal of thought to figure out an API that generalizes, which I cover more deeply in this blog: https://erikdunteman.com/blog/muscle-mem/

Check out the repo, consider giving it a star, or dive deeper into the above blog. I look forward to your feedback!

mindwok No.43991197
It's becoming increasingly clear that memory and context are the bottlenecks in advancing usage of AI. I can't help but feel there needs to be a general solution for this, perhaps even built into the model - everyone seems to be building something on top that is roughly the same thing.
replies(4): >>43991636 >>43997414 >>43998967 >>44001182
1. ramoz No.43991636
Karpathy had a similar interesting take the other day

https://x.com/karpathy/status/1921368644069765486

replies(2): >>43994547 >>43997565
2. FisherKK No.43994547
Skill Library!
3. hnuser123456 No.43997565
I'm starting experiments with having agents write system prompts for sub-agents. Specifically: have the LLM build, test, and validate a small, simple tool, and once it's validated, add it to the system prompt's list of available tools.
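
Roughly this loop, as a toy sketch (llm.complete and the tool format are stand-ins for whatever client/schema you use):

    def grow_toolset(llm, system_prompt: str, tools: dict) -> str:
        # 1. Have the LLM implement a small tool plus a test for it.
        name = "normalize_date"  # in practice the LLM proposes the tool itself
        code = llm.complete(
            system_prompt,
            f"Write a Python function {name}(s) and a test_{name}() asserting its behavior.",
        )
        scope: dict = {}
        exec(code, scope)  # use a real sandbox for untrusted code
        # 2. Validate: keep the tool only if its own test passes.
        try:
            scope[f"test_{name}"]()
        except Exception:
            return system_prompt
        tools[name] = scope[name]
        # 3. Advertise the validated tool in the system prompt for sub-agents.
        return system_prompt + f"\n- {name}: validated helper tool"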

Anyone else experimenting with letting LLMs generate their own or sub-agent system prompts?