
192 points ahlCVA | 1 comment
tremon ◴[] No.45307792[source]
"while sharing the underlying hardware resources"? At the risk of sounding too positive, my guess is that hell will freeze over before that will work reliably. Alternating access between the running kernels is probably the "easy" part (DMA and command queues solve a lot of this for free), but I'm thinking more of all the hardware that relies on state-keeping and serialization in the driver. There's no way that e.g. the average usb or bluetooth vendor has "multiple interleaved command sequences" in their test setup.

I think Linux will have to move to a microkernel architecture before this can work. Once you have separate "processes" for hardware drivers, running two userlands side by side should be a piece of cake (at least compared to the earlier task of converting the rest of the kernel).
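
Roughly what I mean by driver "processes" (a minimal sketch, not how any real system does it; the socket path and message format are made up): the driver becomes an ordinary server that any number of clients or userlands talk to over IPC, so command ordering is owned by the driver rather than by whichever kernel happens to be running:

    /* Sketch of a user-space driver serving requests over a Unix socket.
     * Error handling omitted; DRV_SOCK and the protocol are hypothetical. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    #define DRV_SOCK "/tmp/toy-driver.sock"

    int main(void)
    {
        int srv = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, DRV_SOCK, sizeof(addr.sun_path) - 1);
        unlink(DRV_SOCK);
        bind(srv, (struct sockaddr *)&addr, sizeof(addr));
        listen(srv, 8);

        for (;;) {
            /* Requests are accepted and executed one at a time, so the
             * driver, not its callers, serializes access to the device. */
            int client = accept(srv, NULL, NULL);
            char req[128];
            ssize_t n = read(client, req, sizeof(req) - 1);
            if (n > 0) {
                req[n] = '\0';
                printf("driver: executing \"%s\"\n", req);
                write(client, "ok", 2);
            }
            close(client);
        }
    }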

Will be interesting to see where this goes. I like the idea, but if I were to go in that direction, I would choose something like a Genode kernel to supervise multiple Linux kernels.

replies(3): >>45307924 #>>45307985 #>>45313771 #
1. p_l ◴[] No.45313771[source]
This is something that was actually implemented and used on multiple platforms, and it generally requires careful development of all the interacting OSes. Resources that have to be multiplexed are handled through IPC between the running kernels; everything else is set up to be exclusively owned by one of them.
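
As a rough illustration of that split (names and policies invented, not taken from any particular implementation), you can think of it as a per-resource ownership table: exclusively owned resources are touched directly by exactly one kernel, while everything else goes through an IPC proxy:

    /* Sketch: each resource is either owned outright by one kernel or
     * multiplexed through IPC shared by all of them. */
    #include <stdio.h>

    enum res_policy { RES_EXCLUSIVE, RES_MULTIPLEXED };

    struct resource {
        const char *name;
        enum res_policy policy;
        int owner;                  /* kernel id; only used if exclusive */
    };

    static const struct resource table[] = {
        { "nic0",    RES_EXCLUSIVE,   0 },   /* kernel 0 owns the NIC      */
        { "disk1",   RES_EXCLUSIVE,   1 },   /* kernel 1 owns its own disk */
        { "console", RES_MULTIPLEXED, -1 },  /* shared via an IPC proxy    */
    };

    static void request(int kernel, const struct resource *r)
    {
        if (r->policy == RES_MULTIPLEXED)
            printf("kernel %d: %s request forwarded over IPC\n", kernel, r->name);
        else if (r->owner == kernel)
            printf("kernel %d: direct access to %s\n", kernel, r->name);
        else
            printf("kernel %d: %s denied, owned by kernel %d\n",
                   kernel, r->name, r->owner);
    }

    int main(void)
    {
        request(0, &table[0]);   /* direct: kernel 0 owns nic0       */
        request(1, &table[0]);   /* denied: nic0 belongs to kernel 0 */
        request(1, &table[2]);   /* forwarded: console is multiplexed */
        return 0;
    }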

This allowed cheap "logical partitioning" of machines without actually using a hypervisor or special hardware support.