
120 points by gbxk | 1 comment | source

I've built this to make it easy to host your own infra for lightweight VMs at large scale.

Intended for executing AI-generated code, for CI/CD runners, or for off-chain AI DApps. Built mainly to avoid the dangers and mess of Docker-in-Docker.

Super easy to use via the CLI / Python SDK, and friendly to AI engineers who usually don't want to mess with VM orchestration and networking too much.

Defense-in-depth philosophy.

Would love to get feedback (and contributors: clear and exciting roadmap!). Thanks!

whalesalad ◴[] No.45660087[source]
From an outside perspective, this looks silly, like fitting a square peg in a round hole. But I do acknowledge the "what if we could run VMs as easily as we run containers" use case, and at the moment it seems like things like this (and Kata Containers) are the only ways to do it. Wondering a few things: do all the layers of abstraction make things brittle, and how is performance impacted?
replies(1): >>45660211 #
1. gbxk ◴[] No.45660211[source]
It uses Kata with Firecracker, which gives you about as light a boot as it gets. Sub-second boot is achievable with a lighter rootfs, which is also on the roadmap (one of the easiest items, actually). The k8s layer doesn't add overhead either, compared to any other VM. Compared to bare containers, you could see around 5% overhead from virtualization, with the exact figure depending on the workload.
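For readers unfamiliar with the Kata-on-Kubernetes setup described above: VM-backed pods are normally selected through a RuntimeClass, so regular pods are untouched and only opted-in workloads boot inside a Firecracker microVM. A minimal sketch, assuming the node's containerd has a Firecracker-backed Kata runtime registered under the handler name `kata-fc` (the handler name and image are assumptions and must match your node config):

```yaml
# RuntimeClass mapping opted-in pods to the Kata/Firecracker runtime.
# "kata-fc" is an assumed handler name; it must match the runtime
# configured in containerd on the node.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-fc
handler: kata-fc
---
# A pod that runs inside its own microVM kernel instead of
# sharing the host kernel with other containers.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-task
spec:
  runtimeClassName: kata-fc
  containers:
    - name: task
      image: alpine:3.20
      command: ["sh", "-c", "echo hello from a microVM"]
```

The per-pod kernel boundary is what the "defense-in-depth" claim rests on: even if the workload escapes the container, it is still inside a dedicated VM rather than on the shared host kernel.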