
120 points gbxk | 2 comments

I've built this to make it easy to host your own infra for lightweight VMs at large scale.

Intended for executing AI-generated code, for CI/CD runners, or for off-chain AI DApps. Mainly to avoid the dangers and mess of Docker-in-Docker.

Super easy to use with the CLI / Python SDK, and friendly to AI engineers who usually don't want to mess with VM orchestration and networking too much.
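As a rough illustration of what such a sandbox SDK typically looks like, here is a minimal sketch. The `Sandbox` class and its `run` method are hypothetical stand-ins, not the project's real API; a real SDK would boot and tear down a microVM, while this stub just evaluates locally to show the call shape.

```python
import contextlib
import io


class Sandbox:
    """Hypothetical stand-in for a microVM-backed sandbox handle."""

    def __init__(self, image: str):
        self.image = image

    def __enter__(self):
        # A real SDK would boot a lightweight VM here.
        return self

    def __exit__(self, *exc):
        # A real SDK would tear the VM down here.
        return False

    def run(self, code: str) -> str:
        # A real SDK would execute this inside the isolated VM;
        # the stub captures local stdout to demonstrate the interface.
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(code, {})
        return buf.getvalue()


with Sandbox(image="python:3.12") as sb:
    out = sb.run("print(2 + 2)")
    print(out.strip())  # → 4
```

The context-manager shape is the usual design choice here: it guarantees the VM is destroyed even if the guest code raises, which is the whole point of running untrusted AI-generated code in a disposable sandbox.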

Defense-in-depth philosophy.

Would love to get feedback (and contributors: clear & exciting roadmap!), thx

mentalgear No.45657697
I would really like to see a good local sandboxing solution in this space, something that is truly local-first. This is especially important since many coding models / agentic builders will eventually become lightweight enough to run them on-device instead of having to buy tokens and share user data with big LLM cloud providers.
1. mkagenius No.45661176
> something that is truly local-first

Hey, we built coderunner[1] exactly for this purpose. It's completely local. We use Apple containers for this (each container is 1:1 mapped to a lightweight VM).

1. Coderunner - https://github.com/instavm/coderunner

2. gbxk No.45666715
Very cool! Apple containers run on Apple ARM, so it's complementary to my stack, which doesn't support ARM yet (but soon will, once it extends to QEMU, which supports ARM). Thanks for sharing!