
168 points | selvan | 1 comment
thepoet No.44462966
We looked at Pyodide and WASM, along with other options like Firecracker, for our need: multi-step tasks that run LLM-generated code locally (via Ollama etc.) with some form of isolation, rather than running it directly on our dev machines. We figured it would be too much work given the various external libraries we have to install. The idea was to have a powerful remote LLM generate code for general-purpose stuff like video editing via ffmpeg or beautiful graph generation via JS + Chromium, and execute it locally with all dependencies installed before execution.

We recently built CodeRunner (https://github.com/BandarLabs/coderunner) on top of Apple Containers and have been using it for some time. It works fine but still needs some improvement to handle arbitrary prompts.
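To make the Apple Containers part concrete, here is a minimal sketch (not taken from the CodeRunner repo; the image name and flags are illustrative and assume the `container` CLI's docker-style interface) of executing an LLM-generated snippet inside a VM-backed container instead of directly on the host:

    # Minimal sketch, assuming Apple's open-source `container` CLI with a
    # docker-like `run` subcommand; image name and flags are illustrative.
    import subprocess

    generated_code = "print('hello from inside the sandbox')"  # would come from the LLM

    result = subprocess.run(
        ["container", "run", "--rm",   # fresh, throwaway container per task
         "python:3.12-slim",           # any image with a Python runtime
         "python", "-c", generated_code],
        capture_output=True, text=True, timeout=120,
    )
    print(result.stdout or result.stderr)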

replies(1): >>44463014
indigodaddy No.44463014
For the Gemini-cli integration, is the only difference between CodeRunner with Gemini-cli and Gemini-cli by itself that you are just running Gemini-cli in a container?
replies(1): >>44463067
thepoet No.44463067
No, Gemini-cli still runs on your local machine. When it generates code from your prompt, CodeRunner runs that code inside a container (which itself sits inside a new lightweight VM, courtesy of Apple, so you get VM-level isolation), installs the requested libraries, executes the generated code there, and returns the result back to Gemini-cli.

This is also not specific to Gemini-cli; you could use the sandbox with any of the popular LLMs, or even with local ones.
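To illustrate the flow described above, here is a hedged Python sketch (hypothetical helper, not CodeRunner's actual API; the `container` flags are assumed to be docker-style): the caller hands over the generated code plus the packages it asked for, the sandbox installs them inside the container, runs the code there, and sends a structured result back.

    import json, pathlib, subprocess, tempfile

    def execute_task(code: str, packages: list[str]) -> dict:
        # Install requested packages and run `code` inside an isolated
        # container, so nothing is ever installed on the host itself.
        with tempfile.TemporaryDirectory() as workdir:
            pathlib.Path(workdir, "task.py").write_text(code)
            install = f"pip install --quiet {' '.join(packages)} && " if packages else ""
            proc = subprocess.run(
                ["container", "run", "--rm",
                 "--volume", f"{workdir}:/work",   # assumption: docker-style mount flag
                 "python:3.12-slim",
                 "sh", "-c", install + "python /work/task.py"],
                capture_output=True, text=True, timeout=600,
            )
        return {"stdout": proc.stdout, "stderr": proc.stderr,
                "exit_code": proc.returncode}

    # The serialized result is what goes back to the calling CLI (Gemini-cli,
    # a local model via Ollama, or anything else that can invoke the sandbox):
    print(json.dumps(execute_task(
        "import requests; print(requests.__version__)", ["requests"]), indent=2))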

replies(2): >>44466643 >>44482478
indigodaddy No.44466643
Thanks for explaining