https://www.digiater.nl/openvms/doc/alpha-v8.3/83final/aa_re...
I think Linux will have to move to a microkernel architecture before this can work. Once you have separate "processes" for hardware drivers, running two userlands side-by-side should be a piece of cake (at least compared to the earlier task of converting the rest of the kernel).
Will be interesting to see where this goes. I like the idea, but if I were to go in that direction, I would choose something like a Genode kernel to supervise multiple Linux kernels.
This sounds like running multiple kernels in a shared security domain, which reduces the performance cost of transitions and sharing, but you lose the reliability and security advantages that a proper VM gives you. It reminds me of coLinux (essentially, a Linux kernel running as a Windows NT device driver).
Does anyone have more details on how OpenVMS Galaxy was actually implemented? I believe it was available for both Alpha and Itanium, but not yet x86-64 (and probably never…)
I think the architecture assumes all loaded kernels are trusted, and imposes no isolation other than having them running on different CPUs.
Given the (relative) simplicity of the PoC, it could be really performant.
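To make the "running on different CPUs" point concrete: without kernel-level separation, the closest thing you can express today is a cpuset partition, which keeps the scheduler off a set of CPUs but still shares one kernel. Here is a rough Python sketch, assuming cgroup v2 mounted at /sys/fs/cgroup and with made-up CPU/node numbers; the multikernel patches presumably do their partitioning at boot rather than through cgroups, and a cpuset gives you none of the separation being discussed, which is kind of the point of the comparison.

    # Sketch: carve CPUs 4-7 and NUMA node 1 into an isolated cpuset partition.
    # Paths, CPU and node numbers are illustrative, not taken from the patch set.
    from pathlib import Path

    CGROUP_ROOT = Path("/sys/fs/cgroup")

    def make_partition(name: str, cpus: str, mems: str) -> Path:
        """Create a cgroup, pin it to the given CPUs/NUMA nodes, and mark it
        as an isolated partition so the scheduler keeps other tasks off it."""
        # The cpuset controller must be enabled in the parent's subtree first.
        (CGROUP_ROOT / "cgroup.subtree_control").write_text("+cpuset")
        cg = CGROUP_ROOT / name
        cg.mkdir(exist_ok=True)
        (cg / "cpuset.cpus").write_text(cpus)
        (cg / "cpuset.mems").write_text(mems)
        (cg / "cpuset.cpus.partition").write_text("isolated")
        return cg

    if __name__ == "__main__":
        part = make_partition("pinned-workload", "4-7", "1")
        # Move a workload in by writing its PID to cgroup.procs:
        # (part / "cgroup.procs").write_text(str(pid))
        print(f"created partition at {part}")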
I think it works for

- Enhanced security through kernel-level separation
- Better resource utilization than traditional VMs (KVM, Xen etc.)

but I don't think it works for

- Improved fault isolation between different workloads
- Potential zero-downtime kernel update with KHO (Kernel Hand Over)

since if the "main" kernel crashes or is supposed to get upgraded, you have to hand hardware back to it.

Isn't that similar to starting up from hibernate to disk? Basically all of your peripherals are powered off and so probably cannot keep their state.
Also, you can actually stop a disk (a member of a RAID device), remove the PCIe-SATA HBA card it is attached to, replace it with a different one, and connect everything back together without any user-space application noticing.
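For anyone who hasn't done this: the sequence is roughly fail/remove the member from the md array, detach the HBA from the PCI bus via sysfs, swap the card, rescan the bus, and re-add the member. A rough Python sketch of those steps; the array, partition and PCI address below are placeholders, and on real hardware you would verify each step (and re-resolve the disk by serial/WWN, since it may come back under a different name).

    # Sketch: hot-swap a RAID member and the HBA behind it while the array
    # keeps running. /dev/md0, /dev/sdc1 and 0000:03:00.0 are placeholders.
    import subprocess
    from pathlib import Path

    ARRAY = "/dev/md0"
    MEMBER = "/dev/sdc1"
    HBA = "0000:03:00.0"   # PCI address of the SATA HBA (hypothetical)

    def run(*cmd: str) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Take the member out of the array; md keeps running degraded.
    run("mdadm", "--manage", ARRAY, "--fail", MEMBER)
    run("mdadm", "--manage", ARRAY, "--remove", MEMBER)

    # 2. Detach the HBA from the PCI bus so the card can be pulled.
    Path(f"/sys/bus/pci/devices/{HBA}/remove").write_text("1")

    input("Swap the card, reattach the disk, then press Enter... ")

    # 3. Rescan the bus so the new HBA and the disk behind it reappear.
    Path("/sys/bus/pci/rescan").write_text("1")

    # 4. Re-add the member; md resyncs it in the background.
    run("mdadm", "--manage", ARRAY, "--add", MEMBER)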
Which of the kernels does the PCI enumeration, for instance, and how is it determined which kernel gets ownership of a PCI device? What about ACPI? Serial ports?
How does this architecture transfer ownership of RAM between kernels, or is it a fixed configuration? What about NUMA awareness? (Likely you would want to partition the system so that each kernel's RAM lives on the same NUMA node as its CPUs; the sketch below shows the topology you would partition along.)
It looks to me like one kernel would need to have 'hypervisor'-like behavior in order to divvy up resources to the other kernels. I think PVM (https://lwn.net/Articles/963718/) would be preferable in this case, because the existing software stack for managing hypervisor resources can already be reused with it.
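On the NUMA question above: the topology you would want to partition along is already exposed under /sys/devices/system/node. A small read-only Python sketch of what a naive "one kernel per node" plan could look like; whether the multikernel patches consume anything like this is exactly the open question.

    # Sketch: read the NUMA topology from sysfs and print a naive
    # "one kernel per node" partition plan. Read-only; configures nothing.
    from pathlib import Path

    NODE_DIR = Path("/sys/devices/system/node")

    def node_mem_total_kb(node: Path) -> int:
        # meminfo lines look like: "Node 0 MemTotal:    32768000 kB"
        for line in (node / "meminfo").read_text().splitlines():
            if "MemTotal" in line:
                return int(line.split()[3])
        return 0

    for node in sorted(NODE_DIR.glob("node[0-9]*")):
        cpus = (node / "cpulist").read_text().strip()
        mem_gib = node_mem_total_kb(node) / (1024 * 1024)
        print(f"{node.name}: CPUs {cpus}, ~{mem_gib:.1f} GiB "
              f"-> candidate partition for one kernel")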