
186 points by ahlCVA | 8 comments
1. tremon No.45307792
"while sharing the underlying hardware resources"? At the risk of sounding too negative, my guess is that hell will freeze over before that works reliably. Alternating access between the running kernels is probably the "easy" part (DMA and command queues solve a lot of this for free), but I'm thinking more of all the hardware that relies on state-keeping and serialization in the driver. There's no way that e.g. the average USB or Bluetooth vendor has "multiple interleaved command sequences" in their test setup.

I think Linux will have to move to a microkernel architecture before this can work. Once you have separate "processes" for hardware drivers, running two userlands side by side should be a piece of cake (at least compared to the earlier task of converting the rest of the kernel).
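To make the idea concrete, here is a minimal sketch of a driver living in its own process (the socket path and protocol are invented for illustration; this is not a real Genode or Linux interface). Device state and serialization live in one place, no matter how many userlands sit on top:

  /* Hypothetical "driver as a process": one process owns the device
     and serializes every command it receives over a local socket. */
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/un.h>
  #include <unistd.h>

  int main(void) {
      int srv = socket(AF_UNIX, SOCK_SEQPACKET, 0);
      struct sockaddr_un addr = { .sun_family = AF_UNIX };
      strncpy(addr.sun_path, "/run/drv-demo.sock", sizeof addr.sun_path - 1);
      unlink(addr.sun_path);
      bind(srv, (struct sockaddr *)&addr, sizeof addr);
      listen(srv, 8);
      for (;;) {
          /* one client command at a time: serialization happens
             here, in the driver process, not in each userland */
          int c = accept(srv, NULL, NULL);
          char cmd[64];
          ssize_t n = recv(c, cmd, sizeof cmd, 0);
          if (n > 0)
              printf("driver: executing %.*s\n", (int)n, cmd);
          close(c);
      }
  }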

It will be interesting to see where this goes. I like the idea, but if I were to go in that direction, I would choose something like Genode to supervise multiple Linux kernels.

2. elteto No.45307924
You just don't share certain devices, like Bluetooth. The "main" kernel will probably own the boot process and manage some devices exclusively. I think the real advantage is running certain applications isolated within a CPU subset, protected/contained behind a dedicated kernel. You don't have the slowdown of VMs, and you don't have to fight the isolation sieve that is Docker.
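For scale: the closest single-kernel tool today is plain CPU affinity (real Linux/glibc API below; the CPU numbers are arbitrary). The multikernel proposal effectively goes a step further and gives such a subset its own kernel:

  #define _GNU_SOURCE
  #include <sched.h>   /* cpu_set_t, CPU_SET, sched_setaffinity */
  #include <stdio.h>
  #include <unistd.h>

  int main(void) {
      cpu_set_t set;
      CPU_ZERO(&set);
      CPU_SET(2, &set);  /* confine this process to CPUs 2 and 3; */
      CPU_SET(3, &set);  /* the CPU numbers are arbitrary         */
      if (sched_setaffinity(0, sizeof set, &set) != 0) {
          perror("sched_setaffinity");
          return 1;
      }
      printf("pid %d confined to CPUs 2-3\n", getpid());
      /* ... run the isolated workload here ... */
      return 0;
  }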
3. vlovich123 No.45307985
Is there anything that says multiple kernels will each be responsible for owning drivers for the hardware? It could be that one kernel owns the hardware while the rest speak to it over a communication channel. That's presumably also why KHO is a thing: you have to hand state over when shutting down the kernel responsible for managing the driver.
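If it works that way, the secondary kernels and the owning kernel need some agreed message format for forwarded I/O. Nothing of the sort is spelled out in the proposal; a hypothetical sketch just to show the shape:

  /* Hypothetical wire format for a cross-kernel driver channel --
     invented here; it only illustrates "secondary kernel forwards
     I/O to the kernel that owns the device". */
  #include <stdint.h>

  enum drv_op { DRV_READ, DRV_WRITE, DRV_IOCTL };

  struct drv_request {
      uint32_t seq;        /* matches a request to its completion */
      uint16_t device_id;  /* device as enumerated by the owning kernel */
      uint16_t op;         /* enum drv_op */
      uint64_t offset;
      uint32_t len;
      uint8_t  payload[];  /* inline data for writes */
  };

  struct drv_completion {
      uint32_t seq;
      int32_t  status;     /* 0 or -errno from the owning kernel's driver */
      uint32_t len;
      uint8_t  payload[];  /* inline data for reads */
  };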
4. yjftsjthsd-h No.45309251
That's fine for

  - Enhanced security through kernel-level separation
  - Better resource utilization than traditional VMs (KVM, Xen, etc.)
but I don't think it works for

  - Improved fault isolation between different workloads
  - Potential zero-downtime kernel update with KHO (Kernel Hand Over)
since if the "main" kernel crashes or needs to be upgraded, then you have to hand the hardware back to it.
5. raron No.45309440
> since if the "main" kernel crashes or is supposed to get upgraded then you have to hand hardware back to it.

Isn't that similar to waking up from hibernate-to-disk? Basically all of your peripherals are powered off and so probably cannot keep their state.
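Concretely, resume already forces every driver to rebuild device state through its power-management callbacks, which is roughly the discipline a kernel handover would need. An abridged kernel-style sketch using the real dev_pm_ops hooks (the foo_* names are placeholders):

  #include <linux/device.h>
  #include <linux/pm.h>

  static int foo_restore_hw(struct device *dev)
  {
      /* device-specific reset and reprogramming goes here */
      return 0;
  }

  static int foo_suspend(struct device *dev)
  {
      /* quiesce DMA, save volatile registers to RAM */
      return 0;
  }

  static int foo_resume(struct device *dev)
  {
      /* power may have been cut: rebuild state from scratch */
      return foo_restore_hw(dev);
  }

  static const struct dev_pm_ops foo_pm_ops = {
      SET_SYSTEM_SLEEP_PM_OPS(foo_suspend, foo_resume)
  };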

Also, you can actually stop a disk (a member of a RAID device), remove the PCIe SATA HBA card it is attached to, replace it with a different one, and connect everything back together without any user-space application noticing.

6. samus No.45311469
The old kernel boots the new kernel, possibly in a "passive" mode, performs a few sanity checks on the new instance, hands over control, and finally shuts itself down.
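That is essentially the existing kexec flow with extra checks; what KHO adds is carrying state across the jump. Sketched from userspace with the real syscalls (kernel/initrd paths and the command line are placeholders, error handling elided):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <string.h>
  #include <sys/reboot.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int main(void) {
      /* stage the new kernel while the old one keeps running */
      int kfd = open("/boot/vmlinuz-new", O_RDONLY);
      int ifd = open("/boot/initrd-new", O_RDONLY);
      const char *cmdline = "root=/dev/sda1 ro";
      syscall(SYS_kexec_file_load, kfd, ifd,
              (unsigned long)(strlen(cmdline) + 1), cmdline, 0UL);
      sync();
      /* point of handover: the old kernel jumps into the new image */
      reboot(RB_KEXEC);
      return 0;
  }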
8. p_l No.45313771
This is something that was actually implemented and used on multiple platforms, and it generally requires careful development of all the interacting OSes. Resources that have to be multiplexed are handled through IPC between the running kernels; everything else is exclusively owned by a single kernel.

This allowed cheap "logical partitioning" of machines without actually using a hypervisor or special hardware support.