169 points by hunvreus | 74 comments
1. NitpickLawyer ◴[] No.43653452[source]
Cool article! The stack (and results) are impressive, but I also appreciate the article in itself, starting from basics and getting to the point in a clear and slowly expanding way. Easy to follow and appreciate.

On a bit of a tangent rant: this kind of writing is slowly going away, taken over by LLM slop (and I'm a huge fan of LLMs, just not of the people who write those kinds of articles). I was recently looking for real-world benchmarks for vllm/sglang deployments of DeepSeek3 on an 8x 96GB pod, to see if the model fits into that amount of RAM with kv cache and context length, what numbers people actually get, etc.

Of the ~20 articles that Google surfaced across various keyword attempts, none were what I was looking for. The excerpts seemed promising, and some even offered tables & stuff related to ds3 and RAM usage, but all were LLM crap. All were written in that simple intro - bla bla - conclusion style, and some even had RAM requirements that made no sense (running a model trained in FP8 in 16bit, something no one would do, etc.)

replies(2): >>43653624 #>>43654145 #
2. fxtentacle ◴[] No.43653624[source]
While I fully agree with you on the absence of good benchmarks and the growing LLM slop ...

"running a model trained in FP8 in 16bit, something noone would do, etc"

I did that because on the RTX 3090 - which can be a good bang per buck for inference - the FP8 support is nerfed at the driver level. So a kernel that upscales FP8 to FP16 inside SRAM, then does the matmul, then downscales to FP8 again can bring massive performance benefits on those consumer cards.
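
At the Python level, the trick looks roughly like this (a toy sketch assuming PyTorch 2.1+ with float8 dtypes; the real win comes from fusing the up/down-cast into the matmul kernel's SRAM rather than doing it eagerly like this):

    import torch

    w8 = torch.randn(4096, 4096).to(torch.float8_e4m3fn)  # FP8 weights as stored
    x = torch.randn(16, 4096, dtype=torch.float16)         # FP16 activations

    w16 = w8.to(torch.float16)      # upscale FP8 -> FP16 (inside SRAM in a real kernel)
    y = x @ w16.t()                 # matmul runs on the non-nerfed FP16 path
    y8 = y.to(torch.float8_e4m3fn)  # downscale the result back to FP8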

BTW, you can run a good DeepSeek3 quant on a single H200.

replies(1): >>43653977 #
3. 1970-01-01 ◴[] No.43653731[source]
The VM vs container debate is fascinating. They are separate yet slowly merging concepts that become more blurred as technology gets cheaper and faster. If the real bottleneck to scale is adaptable code, then it is foolish to dismiss the VM as outdated tech when it can be completely rehomed in 2 seconds. That megabyte of Python code managing your containers would still be busy checking its dependencies in that same timeframe.
replies(2): >>43655040 #>>43662605 #
4. simonklitj ◴[] No.43653932[source]
Interesting read—thanks! One question: in the CoW example, if VM A modifies the data post-fork, what does VM B see when it later copies that data? Does it get the original data from the time of the fork, or VM A’s modified version?
replies(1): >>43655050 #
5. londons_explore ◴[] No.43653973[source]
Unmentioned: there are serious security issues with memory cloning code not designed for it.

For example, an SSL library might have pre-calculated the random nonce for the next incoming SSL connection.

If you clone the VM containing a process using that library, both child VMs will now use the same nonce. Some crypto is 100% broken open if a nonce is reused.
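
A toy demonstration of the failure mode (a sketch using the pyca/cryptography package; key, nonce, and messages are made up):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

    key = os.urandom(32)
    nonce = os.urandom(16)  # pre-calculated once, then inherited by both clones

    def encrypt(plaintext: bytes) -> bytes:
        # ChaCha20 is a stream cipher: ciphertext = plaintext XOR keystream(key, nonce)
        enc = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()
        return enc.update(plaintext)

    c1 = encrypt(b"message from clone A")  # both clones reuse the same nonce,
    c2 = encrypt(b"message from clone B")  # so they share one keystream

    # XORing the two ciphertexts cancels the keystream and leaks the XOR
    # of the plaintexts -- no key required:
    leaked = bytes(a ^ b for a, b in zip(c1, c2))
    assert leaked == bytes(a ^ b for a, b in
                           zip(b"message from clone A", b"message from clone B"))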

replies(7): >>43654026 #>>43654396 #>>43654513 #>>43654702 #>>43654894 #>>43655157 #>>43657321 #
6. NitpickLawyer ◴[] No.43653977{3}[source]
Thanks! I was looking at Blackwell 6000 PROs, 8x 96GB, for running full fp8 (as it's supported and presumably fast).

I know AWQ should run, and be pretty snappy & efficient with the new MLA support added, but I wanted to check if fp8 fits as well, because from simple napkin math it seems pretty tight (it might only work for bs1, ctx_len <8k, which would probably not be suited for coding tasks).

7. generalizations ◴[] No.43654026[source]
Sounds like it would simply be inappropriate to clone & use a VM that's assuming its data is unique. This would also be true under other conditions, e.g. if you needed to spoof a MAC or IPv6 address & picked one randomly.
replies(1): >>43654077 #
8. londons_explore ◴[] No.43654077{3}[source]
The problem is that modern software is so fiendishly complicated that there is almost certainly stuff like that in the code. The question is where, and does it matter?
replies(1): >>43654228 #
9. thomasjudge ◴[] No.43654145[source]
You are describing this as a writing problem, but it sounds more like a search results/search engine problem
10. pragma_x ◴[] No.43654222[source]
I'm starting to see a pattern here. This describes a technology that rapidly deploys "VM" instances in the cloud which support things like Lambda and single-process containers. At what point do we scale this all back to a more rudimentary OS that provides security and process management across multiple physical machines? Or is there already a Linux distro that does this?

I ask because watching cloud providers like AWS slowly reinvent mainframes just seems like the painful way around.

replies(5): >>43654263 #>>43654884 #>>43655058 #>>43656759 #>>43662668 #
11. generalizations ◴[] No.43654228{4}[source]
And the last question is, can the parts with stuff like that be extracted from the rest and run separately?
12. no_wizard ◴[] No.43654263[source]
EDIT: leaving the answer, but I simply misinterpreted what they meant. This isn't the same thing

BSD has had jails for a long time, which let you achieve isolation on a system in this manner, or at least close to it.

replies(1): >>43654811 #
13. mystraline ◴[] No.43654294[source]
Different proposal:

Let's say we have 2 Linux machines. Identical hardware, identical libs.

I'd like to run a simple program on one machine, and then during mid-calculation, would like to transfer the running program to the other machine.

Is this doable?

replies(5): >>43654408 #>>43654455 #>>43654466 #>>43654749 #>>43655094 #
14. bravura ◴[] No.43654367[source]
This is an increasingly important area with LLM-generated code, and I'm curious about people's experiences with codesandbox vs e2b vs daytona
replies(1): >>43667816 #
15. hobofan ◴[] No.43654381[source]
[2022]
16. hypeatei ◴[] No.43654396[source]
> might have pre-calculated the random nonce

Isn't this still a concern even if you're not pre-calculating way ahead of time? If you generate it when needed, a clone could still catch you at the wrong moment (e.g. right before encryption, but right after nonce generation)

replies(1): >>43654654 #
17. new_user_final ◴[] No.43654408[source]
Unrelated, but somewhat similar at a higher level: you can transfer state with durable execution, e.g. temporal.io.

Instead of RAM, the program's state is saved in a DB, and the execution environment resumes from the previous state when restarted

replies(1): >>43654969 #
18. dilyevsky ◴[] No.43654455[source]
Yes, using CRIU [0] or the Docker checkpoint/restore API (which uses CRIU).

[0] https://criu.org/Main_Page
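
For the curious, a minimal sketch of the checkpoint/restore flow (assumes criu is installed and run as root; pid and paths are made up):

    import subprocess

    # Checkpoint: freeze the process tree and dump its state to image files.
    subprocess.run(["criu", "dump", "-t", "12345", "-D", "/tmp/ckpt",
                    "--shell-job", "--tcp-established"], check=True)

    # ... ship /tmp/ckpt to the identical machine (rsync, scp, ...) ...

    # Restore: rebuild the process tree from the images on the target host.
    subprocess.run(["criu", "restore", "-D", "/tmp/ckpt",
                    "--shell-job", "--tcp-established"], check=True)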

19. panki27 ◴[] No.43654466[source]
Interesting thought, but highly dependent on the actual program. Let's assume it doesn't touch any files on disk (no opening sockets either). You would need to at least:

1. Halt the process (SIGSTOP comes to mind)

2. Create a copy of the running program and /proc/$pid - which will also include memory and mmap details

3. Transfer everything to the other machine

4. Load the memory, somehow spawn a new process with the info from the /proc/$pid we saved, and mmap the loaded memory into it

5. Continue the process on the new machine (SIGCONT)

Let me admit that I do not have the slightest clue how to achieve step 4. I wonder if a systemd namespace could make things easier.
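
Steps 1-3 are doable from userspace with enough privileges; a rough sketch (needs root or ptrace permission on the target, and the pid is hypothetical):

    import os
    import signal

    pid = 12345
    os.kill(pid, signal.SIGSTOP)                 # 1. halt the process

    with open(f"/proc/{pid}/maps") as f:         # 2. copy the memory layout...
        regions = f.readlines()

    dump = []
    with open(f"/proc/{pid}/mem", "rb") as mem:  # ...and the memory contents
        for line in regions:
            addrs, perms = line.split()[:2]
            if "r" not in perms:
                continue                         # skip unreadable mappings
            start, end = (int(x, 16) for x in addrs.split("-"))
            try:
                mem.seek(start)
                dump.append((start, mem.read(end - start)))
            except OSError:
                pass                             # e.g. [vsyscall] and friends

    # 3. ship `regions` + `dump` to the other machine. Step 4 (rebuilding a
    # process around them) is the hard part that CRIU solves.
    os.kill(pid, signal.SIGCONT)                 # 5. ...or just resume locally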

20. sunshinekitty ◴[] No.43654513[source]
GCP’s ‘live migrations’ have been doing this for close to a decade, if not more. Must not be that big of a problem.
replies(2): >>43654524 #>>43657289 #
21. londons_explore ◴[] No.43654524{3}[source]
It isn't a problem if you guarantee only one child of the clone lives on - which GCP does.
replies(1): >>43654845 #
22. walrus01 ◴[] No.43654548[source]
> "Virtual machines are often seen as slow, expensive, bloated and outdated. "

By whom, exactly? Citation needed

replies(2): >>43654687 #>>43654783 #
23. zamadatix ◴[] No.43654654{3}[source]
Unless your encryption and transport protocols are 100% stateless, only 1 connection will actually be able to form, even if you duplicate the machine during connection creation.

The problem with pre-computing a bunch and keeping them in memory is that brand new connections made post-cloning would use the same list of nonces.

24. awestroke ◴[] No.43654687[source]
It's not an uncommon viewpoint. Especially if you compare them to containers.
25. hedora ◴[] No.43654702[source]
I was about to say you were being paranoid, then I read the article. It hadn’t occurred to me that anyone would be so reckless!

The proposed workflow involves cloning your dev environment and sharing it with the internet.

At most places, that’s equivalent to publishing your production keys, or at least github credentials.

Even for open source projects where confidentiality doesn’t matter, there are issues like using cargo/npm/etc keys to launch supply chain attacks.

Your nonce attack is harder to pull off, but more devastating if the attacker can man in the middle things like dependency downloads.

26. toast0 ◴[] No.43654749[source]
A search for 'linux process live migration' picks up at least one repo that claims to have done it, and a bunch of other interesting things.

For a very simple program, with limited I/O, it's not too hard; especially if you don't mind a significant pause to move. Difficulty comes when you have FDs to migrate and if you need to reduce the pausing. If you need to keep FDs to the filesystem or the program will load/store to the filesystem periodically, you'd need to do a filesystem migration too... If you need to keep FDs for network sockets, you've got to transfer those somehow.

If it's just stdin/out/err, you could probably do the migration in userspace with some difficulty if you need to keep pid constant (but maybe you don't need that either).

Minimal pausing involves letting the program run on the initial machine while you copy memory, setting pages to read-only so you can catch writes, and only pausing the program once the copy is substantially finished. Then you pause execution on the initial machine. If there's a significant amount of modified pages to copy over when you pause, you can still start execution on the new machine, as long as the modified pages are marked unavailable, if you background copy them before they're used great... if not, you have to block until the modified data comes through.

Probably you do this on two nearby machines with fast networking, and the program doesn't have a lot of writes all over memory, so the pause should be short.
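
As a toy model of that pre-copy loop (all names invented; a real hypervisor tracks dirty pages through the MMU rather than a Python set):

    class Source:
        def __init__(self, pages: dict[int, bytes]):
            self.pages = pages
            self.dirty = set(pages)      # initially every page is uncopied

        def guest_write(self, page: int, data: bytes):
            self.pages[page] = data      # the still-running workload dirties pages
            self.dirty.add(page)

        def take_dirty(self) -> set[int]:
            d, self.dirty = self.dirty, set()
            return d

    def migrate(src: Source, dst: dict[int, bytes], threshold: int = 8):
        dirty = src.take_dirty()
        while len(dirty) > threshold:    # pre-copy rounds: the guest keeps running
            for p in dirty:
                dst[p] = src.pages[p]
            dirty = src.take_dirty()     # pages re-dirtied during the round
        # Short stop-the-world pause: copy the last stragglers, then resume on
        # the destination (or resume early and post-copy these pages on demand).
        for p in dirty:
            dst[p] = src.pages[p]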

replies(2): >>43655532 #>>43656452 #
27. sofixa ◴[] No.43654783[source]
By almost everyone who has done a comparison.

VMs have a full OS that needs to be maintained (patched, upgraded when EOL, etc.).

Hypervisors traditionally cost a metric crapton of money per core. Yes, Proxmox is pretty good, but it's the exception, not the norm. They're also relatively slow in spinning up new VMs (kind of by definition, it takes a lot of time to emulate a full blown replica of hardware vs just starting a process in a cgroup/jail).

And most of all, VMs are just solving the wrong problem. You don't care about emulating hardware, you care about running some workload. Maybe it needs specific hardware or a virtual version of it, but more likely than not, it's a regular batch processor or API that can happily run in a container with almost none of the overhead of a full VM.

replies(1): >>43658486 #
28. hedora ◴[] No.43654811{3}[source]
They’re missing multi-machine orchestration: Run thousands of jails on these dozen machines. Don’t bother me with the details at runtime.

They are also missing an ergonomic tool like Dockerfiles. The following file, plus a CLI tool for “run N copies on my M machines”, should be enough to run BSD in prod, and it is not:

    FROM openbsd:latest
    RUN pkg_add apache-httpd
    RUN echo "apache=enabled" >> /etc/rc.defaults
    COPY public_html /var/www/
    CMD init

I don’t think writing the tooling would be that difficult, but it was missing the last time I looked.

replies(1): >>43655175 #
29. matt-p ◴[] No.43654845{4}[source]
How do we know that isn't enforced here too?
replies(1): >>43655491 #
30. jerf ◴[] No.43654884[source]
We've been cycling around that wheel for a while.

If there's any difference now versus the past, it's that pretty much every point on the wheel is readily available today. If you want a more "rudimentary OS", you don't need to wait for the next turning of the wheel; it's here now. Need full VMs? Still a practical technology. Containers enough? Actively in development and use. Mix & match? Any sensible combination is doable now. And so on.

31. perching_aix ◴[] No.43654894[source]
I don't really follow; what's the issue with that? The two nodes will encrypt using the same key, so they can snoop on each other's outgoing traffic? That doesn't sound like that big of a deal per se.
replies(2): >>43655173 #>>43655673 #
32. comprev ◴[] No.43654924[source]
Needs [2022] in the title
33. WJW ◴[] No.43654969{3}[source]
How does such a method retain things like open network connections that have significant kernel state involved as well?
replies(1): >>43655518 #
34. CompuIves ◴[] No.43655010[source]
Oh wow! Unexpected and cool to see this post on Hacker News! Since then we have evolved our VM infra a bit, and I've written two more posts about this.

First, we started cloning VMs using userfaultfd, which allows us to bypass the disk and let children read memory directly from parent VMs [1].

And we also moved to saving memory snapshots compressed. To keep VM boots fast, we need to decompress on the fly as VMs read from the snapshot, so we chunk snapshots into 4kb-8kb pieces that are zstd-compressed [2].
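
The idea in a few lines (a toy sketch using the python zstandard bindings, with the snapshot as a flat byte array; the real thing works at the block level):

    import zstandard as zstd

    CHUNK = 4096  # pieces in the 4kb-8kb range stay fast to decompress

    def compress_snapshot(memory: bytes) -> list[bytes]:
        c = zstd.ZstdCompressor()
        return [c.compress(memory[i:i + CHUNK])
                for i in range(0, len(memory), CHUNK)]

    def read_page(chunks: list[bytes], offset: int) -> bytes:
        # Decompress only the chunk backing the faulting address, on demand,
        # instead of inflating the whole snapshot before the VM can boot.
        return zstd.ZstdDecompressor().decompress(chunks[offset // CHUNK])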

Happy to answer any questions here!

[1]: https://codesandbox.io/blog/cloning-microvms-using-userfault...

[2]: https://codesandbox.io/blog/how-we-scale-our-microvm-infrast...

replies(2): >>43658063 #>>43659379 #
35. znpy ◴[] No.43655040[source]
In a way it's nothing new. MOSIX/openMosix (https://en.wikipedia.org/wiki/MOSIX, https://en.wikipedia.org/wiki/OpenMosix) did similar stuff with individual processes. It would probably be even faster, since you would only have to move the process's memory and state rather than the whole VM's memory (and its state).

I guess it would/could be nice to have something that moves Kubernetes pods around rather than killing them and starting new ones.

36. CompuIves ◴[] No.43655050[source]
I talk a bit about this here: https://codesandbox.io/blog/cloning-microvms-using-userfault.... Before VM A updates its data, the data is copied over to VM B if VM B hadn't written/read that data yet.
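
The semantics mirror plain fork() copy-on-write; a tiny demo of "the clone sees the fork-time data":

    import os

    data = bytearray(b"original")
    r, w = os.pipe()

    pid = os.fork()
    if pid == 0:                      # child = "VM B"
        os.read(r, 1)                 # wait until the parent has written
        # The kernel copied the page before the parent's write landed,
        # so the child still sees the fork-time contents.
        assert bytes(data) == b"original"
        os._exit(0)
    else:                             # parent = "VM A" modifies post-fork
        data[:] = b"modified"
        os.write(w, b"x")
        os.waitpid(pid, 0)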
replies(1): >>43655267 #
37. zer00eyz ◴[] No.43655058[source]
> I ask because watching cloud providers like AWS slowly reinvent mainframes just seems like the painful way around.

When AWS was the hot new thing in town, a server came in at 12/24 threads.

A modern AMD machine tops out at 700+ threads and 400Gb QSFP interconnects. Go back to 2000 and the dotcom boom, and that's a whole mid-sized company in a 2U rack.

Finding single applications that can leverage all that horsepower is going to be a challenge... and that's before you layer in lift for redundancy.

Strip away all the bloat, all the fine examples of Conway's law that organizations drag around (or inherit from other orgs), and compute is at a place where it's effectively free... with the real limits/costs being power and data (and these are driven by density).

38. tryauuum ◴[] No.43655094[source]
If you put your program in a qemu/kvm VM, then it just works.

I was completely blown away when I first experienced it. My code running in a VM never even noticed any downtime. All the network connections were preserved, and so on.

39. nimbius ◴[] No.43655132[source]
>Virtual machines are often seen as slow, expensive, bloated and outdated.

by whom?

I tend to loathe Firecracker posts because they're all just thinly veiled ads for Amazon services.

Firecracker is not included in the standard Linux KVM/QEMU duo and has sparse documentation. You cannot deploy a Firecracker image like a traditional VM; in fact, there are no tools to assist in creating a Firecracker VM, and the filesystem for the VM must be ext4.

TL;DR: this is all fun stuff if you're 200% cloud, but most companies run a ton of on-prem VMs as well.

replies(1): >>43655346 #
40. CompuIves ◴[] No.43655157[source]
Yes, that's right. The Firecracker team has written a fantastic doc about this as well: https://github.com/firecracker-microvm/firecracker/blob/main....

It's important to refresh entropy immediately after cloning. Still, there can be code that didn't assume it could be cloned (even though `fork` has always existed, of course). Because of this, we don't live-clone across workspaces for unlisted/private sandboxes, and we limit the use case to dev envs where no secrets are stored.

41. Rygian ◴[] No.43655173{3}[source]
A nonce is not a key; it's a piece of random data that is meant to be used at most once.

If an attacker sees valid nonces on one VM, and knows of another VM sharing the same nonces, then your crypto on both* VMs becomes vulnerable to replay attacks.

*read: all

replies(2): >>43655417 #>>43656303 #
42. no_wizard ◴[] No.43655175{4}[source]
I think I may have simply misinterpreted what you meant. You're right, it's not Dockerfile-esque easy.
43. simonklitj ◴[] No.43655267{3}[source]
Clever! Thank you.
44. hhh ◴[] No.43655346[source]
I was using Ignite for a while to create Firecracker VMs; I think it's called Flintlock now. Ignite worked great when I was using it.
45. nodesocket ◴[] No.43655402[source]
Has anybody tried running Ollama and Open WebUI in Firecracker instead of full VMs? I assume this should work, but I'm not sure about GPU (single and multi) passthrough.
replies(1): >>43667825 #
46. nodesocket ◴[] No.43655417{4}[source]
How would a replay attack work in production, assuming multiple VMs share a nonce?
replies(1): >>43655794 #
47. jsnell ◴[] No.43655491{5}[source]
Because their main selling point is to run the copies concurrently with the original.
48. dilyevsky ◴[] No.43655518{4}[source]
It does not. All the state that you need to make "durable" has to be explicitly committed to Temporal via their SDK.
49. dilyevsky ◴[] No.43655532{3}[source]
If you're talking about CRIU, then it's not just a claim; it actually works well in production. I know Google was using it in prod on their internal systems, and probably many others are too. It can even migrate TCP connections for you via the socket repair API in Linux.
50. londons_explore ◴[] No.43655673{3}[source]
Reusing a nonce often allows the entire world to decrypt or MITM the data.
51. saagarjha ◴[] No.43655794{5}[source]
You record the traffic going to one VM and send it to another, which will now accept it because the nonce is the same.
52. trollied ◴[] No.43656303{4}[source]
“Number ONCE”. NONCE. Indeed.
53. wang_li ◴[] No.43656452{3}[source]
>...keep FDs for network sockets, you've got to transfer those somehow.

And if you have any shared memory segments, semaphores, or message queues, you have to drag along a bunch of other processes.

54. phgn ◴[] No.43656578[source]
(2022)
55. robszumski ◴[] No.43656759[source]
We were working on this at CoreOS before Kubernetes came about – called fleet https://github.com/coreos/fleet. Distributed systemd units run across a cluster, typically running containers or golang binaries with a super minimal OS underneath. I always thought it was cool but it definitely had its challenges and Kubernetes is better in most ways, IMO.
56. Imustaskforhelp ◴[] No.43657082[source]
In the Minecraft example video, we are shown a person quitting the server, and then the server stops and restarts (that 2-second clone of the VM).

But what if I have a service like a normal Minecraft server, say Hypixel or others? They can't tolerate a 2-second delay. Maybe we would have to use proxies in that case.

I am genuinely interested in this tech.

Currently, I am much more in favour of TinyKVM and its snapshotting, because it's even lighter than Firecracker (I think). I really like the dev behind TinyKVM as well.

57. oceanplexian ◴[] No.43657289{3}[source]
Live Migration on VMWare has been a thing before Google even had a cloud service.
replies(1): >>43657602 #
58. dietr1ch ◴[] No.43657321[source]
A neat use case for cloning is not truly duplicating a machine, but moving it from one host that is about to go down to another one.

There are caveats in the network though, as packets targeting the old address need to be re-routed until all connections target the new machine.

59. tanelpoder ◴[] No.43657602{4}[source]
VMware even has a vSphere Fault Tolerance product that creates a "live shadow instance" of a VM, mirroring the primary virtual machine (with up to 4 vCPUs). So you can do a quick failover in an "immediate planned" failover case, but apparently even when the primary DB goes down. I guess this might work when some external system (like a storage array) goes down on the primary: you can just switch to the other VM (with the latest memory/CPU state), replay that I/O there, and keep going. But if there's a hard crash of the primary, and it actually does work, then they must be doing lots of reasoning about internal state-change ordering & external device side effects (somewhat like Antithesis, but for a different purpose). Back in the day, they supported only uniprocessor VMs (with something called vLockstep) and later up to 4 vCPUs with something called Fast Checkpointing.

I've always wanted to test this out for fun, by now 15 years have gone by and I've never got to it...

https://www.vmware.com/products/cloud-infrastructure/vsphere...

replies(1): >>43657915 #
60. manish_gill ◴[] No.43657702[source]
They tried running Minecraft, but I wonder if similar (or better) cloning is possible for a mission-critical workload, like a database consuming a huge amount of memory. Neon uses QEMU to achieve this, for example: https://neon.tech/docs/reference/glossary#live-migration but is that the only way?
61. umachin ◴[] No.43657915{5}[source]
VMware has also had a patent on live VM cloning (they called it VMfork) for quite a few years now. I worked on the team that built related features; the feature itself shipped in the desktop product. https://blogs.vmware.com/euc/2016/02/horizon-7-view-instant-...

Live migration had some very cool demos. They would run an intensive workload, such as a game, force a crash, and the VM would resume with 0 buffering.

62. dang ◴[] No.43657946[source]
Related:

We clone a running VM in 2 seconds - https://news.ycombinator.com/item?id=38651805 - Dec 2023 (10 comments)

63. pulkitsh1234 ◴[] No.43658063[source]
Enjoyed reading all these and learnt a lot! Thanks for taking the time out to write the blogs!
64. mschuster91 ◴[] No.43658299[source]
> How to handle network and IP duplicates on cloned VMs

That is indeed what I would love to read the most! Because no matter what you do, it gets complex. If you tear down the network stack of the "old" VM, applications (like Minecraft) might head into unstable territory when the listener socket disappears, and the "new" VM has to go through the entire DHCP flow, which may easily take a second or more. And if you just do the equivalent of S3 sleep (suspend to RAM), the first "new" VM will have everything working as expected, but any further VM spawned from the template will run into duplicate IP/MAC address usage.

65. tryauuum ◴[] No.43658486{3}[source]
While you are correct in calling VM startup slow compared to container startup, reading "emulating hardware" burns my eyes.

Modern VMs don't emulate hardware. When a VM has a hard drive or a network device, there's no sophisticated code to trick the VM into believing that this is real hardware. Virtio drivers are about the VM writing data to a memory area and assuming it's been written to disk / sent to the network (because in the background the hypervisor reads that same memory area and does the job).
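
A toy picture of that (nothing like the real descriptor/avail/used ring layout, just the shared-memory idea):

    from collections import deque

    shared_ring: deque[bytes] = deque()   # memory both guest and hypervisor see

    def guest_submit(buf: bytes) -> None:
        shared_ring.append(buf)           # guest "does I/O" by writing memory,
                                          # then kicks a doorbell/eventfd

    def host_drain(backing_file) -> None:
        while shared_ring:                # hypervisor reads the same memory and
            backing_file.write(shared_ring.popleft())  # performs the real write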

replies(1): >>43658532 #
66. sofixa ◴[] No.43658532{4}[source]
> modern VMs don't emulate hardware

They provide pretend hardware which isn't really necessary.

67. zekrioca ◴[] No.43659379[source]
The example code linked in the post is not available. Would you know where it went? Thanks, great article!
68. lofaszvanitt ◴[] No.43659761[source]
What problem is this supposed to solve?
69. jiggawatts ◴[] No.43662605[source]
IMHO the real advancement was the implicit snapshot Docker takes after every line in a Dockerfile.

Virtual Machine builder scripts like Packer could do this… but don’t.

It’s a choice, not an inherent technology limitation.

replies(1): >>43674956 #
70. pabs3 ◴[] No.43662668[source]
There was a multi-machine single-Linux-kernel-instance distro many years ago called Kerrighed. The company behind it died unfortunately so it hasn't kept up with Linux kernel patch rebasing. It offered a "view of a unique SMP machine on top of a cluster of standard PCs".

https://en.wikipedia.org/wiki/Kerrighed

https://sourceforge.net/projects/kerrighed/

71. nkko ◴[] No.43667816[source]
Check this out: Spinning 100 Agents in Daytona (https://youtu.be/OFFFyfgO2ik). We should also release an open source speed benchmark soon.
72. nkko ◴[] No.43667825[source]
As far as I understand Firecracker, you can't do GPU passthrough.
73. dontlaugh ◴[] No.43674956{3}[source]
I have the opposite opinion: the implicit overlay filesystem in Docker is an unnecessary and frustrating complication.