237 points jdkee | 3 comments
1. calebhwin No.45950375
If I may make a suggestion: many of the problems folks face with MCP would be solved if their agents were JIT compiled, not run in a static while loop.

We've been developing this in case folks are interested: https://github.com/stanford-mast/a1

replies(1): >>45951156 #
2. brouser No.45951156
Not sure what you're compiling, or what a static while loop is here.
replies(1): >>45958173 #
3. cstrahan No.45958173
I just skimmed the README.

I believe the point is to do something akin to "promise pipelining":

https://capnproto.org/rpc.html

http://erights.org/elib/distrib/pipeline.html
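The core idea of promise pipelining: you chain calls on results that haven't resolved yet, so a whole chain of dependent operations can be shipped off and executed in one round-trip instead of one per call. A toy sketch (purely illustrative, not the Cap'n Proto API; `fetch_user` and the `Promise` class are invented for this example):

```python
# Minimal promise-pipelining sketch: dependent calls are chained on an
# unresolved result, and only .resolve() triggers the (single) round-trip.

class Promise:
    def __init__(self, compute):
        self._compute = compute

    def then(self, fn):
        # Chain a dependent call WITHOUT resolving the current one.
        return Promise(lambda: fn(self._compute()))

    def resolve(self):
        # Only here does the pipeline actually execute.
        return self._compute()

# Stand-in for a remote call.
fetch_user = lambda uid: {"id": uid, "name": "ada"}

pipeline = Promise(lambda: fetch_user(42)).then(lambda u: u["name"])
print(pipeline.resolve())  # → ada
```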

When an MCP tool is used, all of its output is piped straight into the LLM's context. If another MCP tool is needed to aggregate/filter/transform/etc. that output, the LLM has to try ("try" being the operative word -- LLMs are nondeterministic by nature) to reproduce the needed bits as inputs to the next tool call. This dramatically increases latency and is an inefficient use of tokens.
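To make that concrete, here's a toy version of the "static while loop" pattern, with fake tools and a counter standing in for model round-trips (all names here are invented for illustration):

```python
# Toy "static while loop" agent: every tool output round-trips through
# the model, and the model must re-emit prior outputs as the next
# tool's arguments.

def search_logs(query):
    # Fake MCP tool: returns a large raw result.
    return [f"line {i}: {query}" for i in range(1000)]

def count_matches(lines):
    # Fake MCP tool: aggregates the previous tool's output.
    return len(lines)

llm_round_trips = 0

def agent_loop():
    global llm_round_trips
    # Step 1: the model decides to search; 1000 lines land in its context.
    llm_round_trips += 1
    raw = search_logs("error")
    # Step 2: the model must reproduce those lines as input to the next
    # tool, burning tokens and risking transcription errors.
    llm_round_trips += 1
    total = count_matches(raw)
    # Step 3: the model emits the final answer.
    llm_round_trips += 1
    return total

print(agent_loop(), llm_round_trips)  # → 1000 3
```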

This "a1" project, if I'm reading it correctly, allows pipelining multiple consecutive tool uses without the LLM/agent in the loop, until the very end, when the final result is handed back to the LLM.
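In other words, roughly: the model emits a small plan once, a local runtime executes the whole tool chain, and only the final value re-enters the model's context. A sketch of that shape (my reading of the idea, not a1's actual API):

```python
# Pipelined alternative: the model produces a plan of tool calls up
# front; the runtime composes them locally, so intermediate outputs
# never round-trip through the model.

def run_pipeline(plan, tools):
    """Execute a list of (tool_name, use_prev_output) steps locally."""
    value = None
    for name, use_prev in plan:
        value = tools[name](value) if use_prev else tools[name]()
    return value  # single hand-off back to the LLM

tools = {
    "search": lambda: [f"line {i}: error" for i in range(1000)],
    "count": lambda lines: len(lines),
}
plan = [("search", False), ("count", True)]
print(run_pipeline(plan, tools))  # → 1000, with one LLM hand-off
```

Same two-tool task as before, but the 1000 intermediate lines stay out of the model's context entirely.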

An alternative approach inspired by the same problems identified in MCP: https://blog.cloudflare.com/code-mode/