
120 points gbxk | 55 comments

I've built this to make it easy to host your own infra for lightweight VMs at large scale.

Intended for executing AI-generated code, for CI/CD runners, or for off-chain AI DApps; mainly to avoid the dangers and mess of Docker-in-Docker.

Super easy to use with the CLI / Python SDK, and friendly to AI engineers who usually don't want to mess with VM orchestration and networking too much.

Defense-in-depth philosophy.

Would love to get feedback (and contributors: clear & exciting roadmap!), thx

2. mentalgear ◴[] No.45657697[source]
I would really like to see a good local sandboxing solution in this space, something that is truly local-first. This is especially important since many coding models / agentic builders will eventually become lightweight enough to run them on-device instead of having to buy tokens and share user data with big LLM cloud providers.
replies(7): >>45658204 #>>45658498 #>>45659517 #>>45661176 #>>45662480 #>>45662484 #>>45666374 #
4. dloss ◴[] No.45658204[source]
Anthropic recently released a sandboxing tool based on bubblewrap (Linux, quite lightweight) and sandbox-exec (macOS). https://github.com/anthropic-experimental/sandbox-runtime

I wonder if nsjail or gVisor may be useful as well. Here's a more comprehensive list of sandboxing solutions: https://github.com/restyler/awesome-sandbox

replies(1): >>45658509 #
5. alexeldeib ◴[] No.45658355[source]
As someone in the space, this ticks a lot of boxes: Kubernetes-native, strong isolation, Python SDK (ideal for ML scenarios). devmapper is a nice OOTB approach.

Glancing at the readme, is your business model technical support? Or what's your plan with this?

Anything interesting to share around startup time for large artifacts, scaling, passing through persistent storage (or GPUs) to these sandboxes?

Curious what things like 'Multi-node cluster capabilities for distributed workloads' mean exactly? inter-VM networking?

replies(1): >>45658623 #
6. gbxk ◴[] No.45658498[source]
(sorry I didn't reply in-thread, I'm new to HN, re-posting response here:)

Exactly! The main local requirement is to have hardware virtualization available (e.g. /dev/kvm), but that should be fine on your local Linux machine. It won't work on cloud machines or on Mac ARM in the current form, but maybe later if I extend it.

replies(1): >>45658539 #
7. gbxk ◴[] No.45658509{3}[source]
wow that's super new! Thanks for that, will look deeply into it and compare
8. ofrzeta ◴[] No.45658539{3}[source]
There are some providers that offer KVM nested virtualization, I think Google Cloud, Digital Ocean ... any others?
replies(1): >>45658639 #
9. gbxk ◴[] No.45658623[source]
No business model short-term. My goal is broad adoption, 100% open-source.

By multi-node I mean that so far I only support one k8s node, i.e. one machine, but I'm soon adding support for multiple. Still, on 20 CPUs I can run 50+ VM pods with fractional vCPU limits.

For GPU passthrough: not possible today because I use Firecracker as VMM. On roadmap: Add support for Qemu, then GPU passthrough possible.

Inter-VM networking: it's already possible on a single node: 1 VM = 1 pod, and you can have multiple pods per node (have a look at utils/stress-test.sh). Right now I default to deny-all ingress for safety (because by default k8s allows inter-pod communication), but I can make ingress configurable.
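For context, a deny-all ingress default of this kind is expressible as a standard Kubernetes NetworkPolicy; a minimal sketch (the name and namespace are illustrative, not necessarily the manifest Katakate ships):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress   # illustrative name
      namespace: default
    spec:
      podSelector: {}              # selects every pod in the namespace
      policyTypes:
        - Ingress                  # no ingress rules listed => all ingress denied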

Startup time: a second, or a few seconds, depending on the base image (alpine, ubuntu, etc.) and whether you use a before_script (which I execute before the network lockdown).

Large artifacts: you can configure the resources allocated to a VM pod in the sandbox config; it basically uses k8s resource limits.

Let me know if you have any other questions! Happy to help.

replies(1): >>45659219 #
10. gbxk ◴[] No.45658639{4}[source]
True! GCP does. I haven't tested it yet. I didn't know DigitalOcean does. If anyone knows others, I'm interested too!
replies(1): >>45661910 #
11. empath75 ◴[] No.45659109[source]
This seems like an amazing stack that ticks a lot of boxes for me, but I really dislike cli or a custom api as the UX for this and would prefer to manage all of this with CRDs so i can just use the k8s client for everything.
replies(2): >>45659145 #>>45659189 #
12. gbxk ◴[] No.45659145[source]
Actually you can! After you run "k7 install" you'll have a k3s cluster up and running, with Kata as a runtime class and Firecracker specified in the Kata config. So nothing prevents you from hitting the Kubernetes API; kubectl will work.

Note: I use k3s' internal kubectl and containerd, to avoid messing with your own if you have some already installed. That means you can run commands like "k3s kubectl ..."
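Since the API server is reachable, a Kata-isolated pod can be created directly with kubectl; a sketch of such a manifest (the runtime class name here is an assumption — list the actual classes with "k3s kubectl get runtimeclass"):

    apiVersion: v1
    kind: Pod
    metadata:
      name: kata-sandbox-demo      # illustrative name
    spec:
      runtimeClassName: kata       # assumed name; check your cluster's runtime classes
      containers:
        - name: main
          image: alpine:latest
          command: ["sleep", "infinity"]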

And thank you for the compliments on the stack.

replies(1): >>45659183 #
14. gbxk ◴[] No.45659189[source]
If you have any suggestion on how I can make this more friendly UX-wise to your personal usage, I am most interested to hear! And this will shape my roadmap.
15. yjftsjthsd-h ◴[] No.45659219{3}[source]
> No business model short-term. My goal is broad adoption, 100% open-source.

IMHO that's kind of a red flag. There's a happy path here where it's successful but stays low-maintenance enough that you just work on it in your spare time, or it takes off and gets community support, or you get sponsorships or such. But there's also an option where in a year or two it becomes your job and you decide to monetize by rug-pulling, announcing that actually paying the bills is more important than staying 100% open source. Not a dig at you; it's just something that's happened enough times that I get nervous when people don't have a plan, and therefore don't have a plan to avoid the outcome that creates problems for users.

replies(1): >>45659393 #
16. gbxk ◴[] No.45659393{4}[source]
Sure one day if it really kicks off I could think of offering additionally a SaaS solution with paid enterprise features like SOC 2 compliance, RBAC, multiple clouds supported, etc. Why not. But I strongly believe that for it to be successful, it needs a strong open-source base. Then, billing huge companies for compliance features or huge usage makes sense. That would support development of the open-source part too.

I like the Docker model, for instance: free for companies under 250 employees and $10m/y revenue.

In any case, it will always be open-source.

Those paid enterprise features wouldn't come from closed source: they would come from the compliance of a particular SaaS-offered infra setup that anybody else could reproduce. Just like Hugging Face.

17. elric ◴[] No.45659517[source]
Are there any such solutions that can adequately protect against side-channel attacks (à la rowhammer, meltdown, spectre, ...)? I mean protecting local file access and network access is pretty easy, but side-channels and VM escaping attacks seem like a bigger concern.
replies(2): >>45659610 #>>45659728 #
18. ed_mercer ◴[] No.45659530[source]
Why do I need this if I already have containers and k8s for running agents?
replies(1): >>45659581 #
19. gbxk ◴[] No.45659581[source]
It is well known that containers do not provide safe isolation; it is not their purpose. They share the kernel and page cache with the host, and any kernel exploit gives someone in a container potential root control of the host (see DirtyPipe, DirtyCow). That's why you need VM-level isolation.
replies(2): >>45659754 #>>45662856 #
20. gbxk ◴[] No.45659610{3}[source]
That's an interesting direction! TEE support would be relatively straightforward with current stack (and it's on my roadmap), so that could be a first step forward.
21. ATechGuy ◴[] No.45659728{3}[source]
Side-channel attacks apply to multi-tenant cloud environments, not local ones.
replies(1): >>45660139 #
23. whalesalad ◴[] No.45660087[source]
From an outside perspective, this looks silly, like fitting a square peg in a round hole. But I do ack the "what if we could run VMs as easily as we run containers" use case, and atm it seems like things like this (and Kata Containers) are the only ways to do it. Wondering a few things: do all the layers of abstraction make things brittle, and how is performance impacted?
replies(1): >>45660211 #
24. elric ◴[] No.45660139{4}[source]
That seems like a naive take. If any of your local VMs are internet-connected and compromised, side-channel attacks could be used to exfiltrate data from other VMs or the host.
replies(1): >>45660552 #
25. gbxk ◴[] No.45660211[source]
It uses Kata with Firecracker, which gives you as light a boot as it gets. Sub-second booting is achievable with a lighter rootfs, which is also on the roadmap (one of the easiest items, actually). The k8s layer doesn't add overhead compared to any other VM. Compared to bare containers, you could see around a 5% overhead due to virtualization, depending on the workload.
26. ATechGuy ◴[] No.45660552{5}[source]
Then why would that only apply to VMs, and not apps?
27. gbxk ◴[] No.45660617[source]
Thanks everyone for the amazing feedback and discussion!

For anyone curious:

- Docs: https://docs.katakate.org

- LangChain Agent tutorial: https://docs.katakate.org/guides/langchain-agent

It's getting late where I am, so I'm heading to bed — looking forward to replying to any new comments tomorrow!

28. Bnjoroge ◴[] No.45660670[source]
Great project. There have been multiple approaches/tools in this space (off the top of my head: E2B, Arrakis, Claude's new tool). How is this different?
replies(1): >>45666827 #
29. mkagenius ◴[] No.45661176[source]
> something that is truly local-first

Hey, we built coderunner[1] exactly for this purpose. It's completely local. We use Apple containers for this (which are 1:1 mapped to a lightweight VM).

1. Coderunner - https://github.com/instavm/coderunner

replies(1): >>45666715 #
30. srcreigh ◴[] No.45661487[source]

    name: project-build
    image: alpine:latest
    namespace: default
    egress_whitelist:
      - "1.1.1.1/32"      # Cloudflare DNS
      - "8.8.8.8/32"      # Google DNS
This is basically a wide-open network policy as far as data exfiltration goes, right?

Malicious code just has to resolve <secret>.evil.com and Google/CF will forward that query to evil resolver.
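The channel described above can be sketched in a few lines of Python (the domain is hypothetical and nothing is actually sent anywhere; this only shows how a secret packs into DNS-safe names):

```python
import base64

def exfil_hostnames(secret: bytes, domain: str = "evil.example") -> list[str]:
    """Pack a secret into DNS-safe labels; each lookup of one of these
    names would leak a chunk to whoever runs the authoritative resolver."""
    encoded = base64.b32encode(secret).decode().rstrip("=").lower()
    max_label = 63  # DNS limits a single label to 63 octets
    chunks = [encoded[i:i + max_label] for i in range(0, len(encoded), max_label)]
    return [f"{i}.{chunk}.{domain}" for i, chunk in enumerate(chunks)]
```

Any egress rule that lets queries reach a public recursive resolver forwards these names to the attacker's nameserver, which is why DNS-level filtering, not just an IP whitelist, is needed.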

replies(1): >>45661641 #
31. gbxk ◴[] No.45661641[source]
That's a config example.

Yes, blocking DNS exfiltration requires DNS filtering at the cluster level. This is what will be added with the Cilium integration, which is in the top 3 on the roadmap (top of the readme).

DNS resolution is required for basic Kubernetes functionality and hostname resolution within the cluster.

That's said explicitly in several places in the docs: "DNS to CoreDNS allowed"

One thing I could do is expose this in the config, to allow the user to block all DNS resolution until Cilium is integrated. LMK if desired!
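For reference, DNS-aware filtering of the kind Cilium enables looks roughly like the following CiliumNetworkPolicy (a sketch of Cilium's general toFQDNs mechanism; names and patterns are illustrative, not Katakate's actual integration):

    apiVersion: cilium.io/v2
    kind: CiliumNetworkPolicy
    metadata:
      name: dns-filtered-egress        # illustrative name
    spec:
      endpointSelector: {}             # applies to all pods in the namespace
      egress:
        # Allow DNS only via kube-dns, and only for matching query names:
        - toEndpoints:
            - matchLabels:
                k8s:io.kubernetes.pod.namespace: kube-system
                k8s:k8s-app: kube-dns
          toPorts:
            - ports:
                - port: "53"
                  protocol: ANY
              rules:
                dns:
                  - matchPattern: "*.pypi.org"   # queries outside this pattern are dropped
        # Allow traffic only to IPs that resolved from an allowed name:
        - toFQDNs:
            - matchPattern: "*.pypi.org"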

replies(1): >>45662690 #
32. eyberg ◴[] No.45661910{5}[source]
We (NanoVMs) can run (both unikernel and normal linux) virtualized workloads on plain old ec2 instances (eg: t2.small).
replies(1): >>45666616 #
33. bigwheels ◴[] No.45661981[source]
Is this basically an open-source DIY version of E2B?

If so, cool! AFAICT E2B is open-source licensed but tricky to set up.

replies(1): >>45662039 #
34. ushakov ◴[] No.45662039[source]
hey, I work at E2B, anything we can do to improve the setup for you?
replies(1): >>45662730 #
35. ushakov ◴[] No.45662155[source]
what does Katakate add on top of Kata?
replies(1): >>45666579 #
36. _false ◴[] No.45662480[source]
What about this: https://github.com/apple/container
replies(1): >>45666610 #
37. sshine ◴[] No.45662484[source]
https://rstrict.cloud/ is a CLI built in Rust on top of the Landlock API for the Linux kernel.

It lets you narrow the permission scope of an executable using simple command line wrappers.

replies(1): >>45666618 #
38. srcreigh ◴[] No.45662690{3}[source]
> One thing I could do is make it exposed in config, to allow the user to block all DNS resolutions until Cilium is integrated. LMK if desired!

Yes, but it's not great for it to be an optional config option. Trivially easy data-exfiltration methods shouldn't be possible at all in a tool like this, let alone be enabled by default.

I want to recommend people try this out without having to tell them about the five different options they need to configure for it to actually be safe. That ends up defeating the purpose of the tool, in my opinion.

Some use cases will require mitmproxy whitelists as well, eg default deny pulling container image except matching the container whitelist.

replies(1): >>45666712 #
39. bigwheels ◴[] No.45662730{3}[source]
I dig E2B, it's a great service and very cost effective. Thanks for all your hard work!
40. innanet-worker ◴[] No.45662856{3}[source]
today i'm one of the lucky 10k https://xkcd.com/1053/
replies(1): >>45666581 #
41. re_spond ◴[] No.45665429[source]
Looks like an interesting project. Do you have any comments on how it is different from running gVisor?
replies(1): >>45666785 #
42. ygouzerh ◴[] No.45666030[source]
Nice, seems way cheaper to use than the new Cloudflare Sandbox SDK solution
replies(1): >>45666597 #
43. kernc ◴[] No.45666374[source]
Local-first (on Linux), POSIX shell: https://github.com/sandbox-utils/sandbox-run
replies(1): >>45666721 #
44. gbxk ◴[] No.45666579[source]
Katakate is built on top of Kata and sets up a stack combining Kubernetes (K3s), Kata, Firecracker, and the devmapper snapshotter for thin-pool provisioning. Combining these tools is highly non-trivial and can be a headache for many, especially AI engineers who are often more comfortable with Python workflows. The stack gets deployed with an Ansible playbook. It implements a CLI, API, and Python SDK to make it super easy to use. A lot of defense-in-depth settings are also baked in so that you don't need to understand these systems at a low level to get a secure setup.
45. gbxk ◴[] No.45666581{4}[source]
Lucky you! And lucky me for sharing the info :)
46. gbxk ◴[] No.45666597[source]
Thanks, I'll review that one too and compare.
47. gbxk ◴[] No.45666610{3}[source]
Very cool one. That's dedicated to Apple ARM, which I don't currently support, so the two are complementary. Apple containers share some primitives with Kata. I'll investigate whether it's possible to use Apple containers as a VMM inside Kata, or to create an Apple containers runtime class in Kubernetes. If either is possible, we could potentially use Apple containers as a backend in Katakate. I need more time to study that.
48. gbxk ◴[] No.45666616{6}[source]
Interesting, thanks for sharing!
49. gbxk ◴[] No.45666618{3}[source]
Thanks, will study that one too!
50. gbxk ◴[] No.45666712{4}[source]
This is an excellent point. I moved this to #1 on the TODO list. I'll deny all DNS resolution by default until Cilium is integrated, if that passes the basic functionality tests.

I'll also add whitelist/deny rules for container image pulling to the roadmap.

Thanks!

51. gbxk ◴[] No.45666715{3}[source]
Very cool! Apple containers run on Apple ARM, so it's complementary to my stack, which doesn't support ARM yet (but soon will, when extending to QEMU, which supports ARM). Thanks for sharing!
52. gbxk ◴[] No.45666721{3}[source]
Thanks for sharing, adding it to my list.
53. gbxk ◴[] No.45666785[source]
Thanks! Yes: Katakate provides much stronger isolation, since it uses hardware virtualization (via Kata Containers and Firecracker) while gVisor relies purely on software sandboxing in user space.

gVisor isolates containers by intercepting system calls in a user-space kernel, so it can still be vulnerable to sandbox escape via gVisor bugs, though not directly through Linux kernel exploits (since gVisor doesn’t expose the host kernel to the container).

Katakate also provides more than isolation: it offers orchestration through Kubernetes (K3s).

You could create a gVisor RuntimeClass in Kubernetes to orchestrate gVisor sandboxes, but that would require extra setup.

54. gbxk ◴[] No.45666827[source]
Thanks! I'll review Arrakis and come back. E2B is often considered harder to set up and less friendly for AI engineers to contribute to directly; Katakate is the only alternative fully implemented in Python (core modules, Typer CLI, FastAPI, Python SDK).

Our native K8s support and exposure of the K8s API also make it friendly to devops.

Finally, our deploy/infra stack is lean and fits tightly in a single Ansible playbook, which makes it easy to understand and contribute to, letting you rapidly gain full understanding and ownership of the stack.
Finally, our deploy/infra stack is lean and tightly fits in a single Ansible playbook, which makes it easy to understand and contribute to, letting you rapidly gain full understanding and ownership of the stack.