
364 points Klasiaster | 6 comments
justmarc ◴[] No.41853080[source]
I'm interested in these kinds of kernels for running very high-performance network/IO-specific services on bare metal, with minimal system complexity/overhead and hopefully better (potential) stability and security.

The big concern I have however is hardware support, specifically networking hardware.

I think a very interesting approach would be to boot the machine with a FreeBSD or Linux kernel purely for hardware and network support, and use a sort of Rust OS/abstraction layer for everything else, bypassing or simply not using the originally booted kernel for all userland-specific stuff.

replies(4): >>41853111 #>>41853348 #>>41853724 #>>41855929 #
1. nijave ◴[] No.41853724[source]
Couldn't you just boot the Linux kernel directly and launch a generic app as PID 1 instead of a full-blown init system with a bunch of daemons?

That's basically what you're getting with Docker containers and a shared kernel. AWS Lambda does something similar with dedicated kernels via Firecracker VMs.

replies(2): >>41853792 #>>41855864 #
2. mjevans ◴[] No.41853792[source]
Yes, you can. You can even have a different PID 1 configure whatever is needed and then replace its core image with the new PID 1.
3. justmarc ◴[] No.41855864[source]
Yes, but I wanted to bypass the complexity of the Linux kernel entirely, too.

Basically a single app talking directly to the network (the world), with as little else as possible in between.

replies(1): >>41860864 #
4. afr0ck ◴[] No.41860864[source]
Linux kernel is not complex. Most of the code runs lock-free. For example, the slab allocator's fast path uses only a single double-word cmpxchg (cmpxchg_double) instruction to allocate an object via kmalloc(). The algorithm scales to any number of CPUs and is NUMA-aware: basically the most concurrent, lowest-allocation-latency allocator you can get on the market, which also returns the best-placed objects for the requesting process on big-memory systems.

The complexity, on the other hand, is architectural and logical: it is what it takes to scale to hundreds of CPUs, maximise bandwidth and reduce latency as much as possible.

Any normal Rust kernel will either have issues scaling on multi-cores or use tax-heavy synchronisation primitives. The kernel's RCU and lock-free algorithms took a long time to be discovered, to mature, and to be aggressively optimised for complex modern computer architectures: out-of-order execution, pipelining, deep memory hierarchies (especially caching) and NUMA.

replies(2): >>41864610 #>>41881359 #
5. sshine ◴[] No.41864610{3}[source]
A new kernel can never beat legacy kernels on hardware support.

To reach a useful state, you only need to be highly performant on a handful of currently popular server architectures.

> Any normal Rust kernel will either have issues scaling on multi-cores or use tax-heavy synchronisation primitives.

I'm not sure how that applies to Asterinas. Is Asterinas any normal Rust kernel?

https://asterinas.github.io/book/kernel/the-framekernel-arch...

6. ladyanita22 ◴[] No.41881359{3}[source]
> Any normal Rust kernel will either have issues scaling on multi-cores or use tax-heavy synchronisation primitives. The kernel RCU and lock-free algorithm took a long time to be discovered and become mature and optimised aggressively to cater for the complex modern computer architectures of out-of-order execution, pipelining, complex memory hierarchies (especially when it comes to caching) and NUMA.

Why would that be the case at all? What does Rust have to do with that?