
408 points ksec | 1 comment
tampueroc ◴[] No.45230925[source]
Slightly related and coming from ignorance here, but what is the general intuition for the pros and cons of a microkernel approach in OS development?
replies(2): >>45231431 #>>45231610 #
mike_hearn ◴[] No.45231610[source]
Every modern commercial OS is a hybrid architecture these days. Generally subsystems move out of the kernel when performance testing shows the cost isn't too high and there's time/money to do so. Very little moves back in, but it does happen sometimes (e.g. kernel TLS acceleration).
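
Kernel TLS is a good concrete case: the handshake stays in userspace and only the bulk record encryption moves into the kernel, so sendfile() can serve encrypted bytes without extra copies. A minimal sketch of how a program opts in (assuming Linux with CONFIG_TLS, reasonably recent headers, and a TLS 1.2 AES-128-GCM session already negotiated in userspace; error handling trimmed):

    /* Sketch: hand an established TLS session's transmit keys to the
     * kernel so subsequent write()/sendfile() bytes are encrypted
     * in-kernel. Assumes CONFIG_TLS=y and a TLS 1.2 AES-128-GCM
     * session already negotiated in userspace (e.g. via OpenSSL). */
    #include <linux/tls.h>
    #include <netinet/tcp.h>
    #include <string.h>
    #include <sys/socket.h>

    int enable_ktls_tx(int sock, const unsigned char key[16],
                       const unsigned char iv[8],
                       const unsigned char salt[4],
                       const unsigned char rec_seq[8])
    {
        /* Attach the "tls" upper-layer protocol to the TCP socket. */
        if (setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls")) < 0)
            return -1;

        struct tls12_crypto_info_aes_gcm_128 ci;
        memset(&ci, 0, sizeof(ci));
        ci.info.version = TLS_1_2_VERSION;
        ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
        memcpy(ci.key, key, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
        memcpy(ci.iv, iv, TLS_CIPHER_AES_GCM_128_IV_SIZE);
        memcpy(ci.salt, salt, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
        memcpy(ci.rec_seq, rec_seq, TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);

        /* From here on, plain write()s on sock go out as TLS records. */
        return setsockopt(sock, SOL_TLS, TLS_TX, &ci, sizeof(ci));
    }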

There's not much to say about it because there's never been an actual disagreement in philosophy. Every OS designer knows it's better for stability and development velocity to have code run in userspace and they always did. The word microkernel came from academia, a place where you can get papers published by finding an idea, giving it a name and then taking it to an extreme. So most microkernels trace their lineage back to Mach or similar, but the core ideas of using "servers" linked by some decent RPC system can be found in most every OS. It's only a question of how far you push the concept.

As hardware got faster, one of the ways OS designers spent the headroom was moving code out of the kernel. In the 90s Microsoft gained a competitive advantage by running the GUI system in the kernel; eventually they moved it back out into a userland server. Apple nowadays runs a lot of file systems in userspace, but not the core APFS that's used for most stuff, which is still in-kernel. Android moved a lot of stuff out of the kernel over time too. It has to be taken on a case-by-case basis.

replies(2): >>45232116 #>>45233441 #
hollerith ◴[] No.45232116[source]
Can you explain why TTY-PTY functionality hasn't been moved from the Linux kernel to userspace? Plan 9 did so in the 1990s or earlier (i.e., when Plan 9 was created, they initially put the functionality in userspace and left it there.)
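
To be concrete about what that functionality is: on Linux the pseudo-terminal pair, the line discipline between its two ends, the termios state and the job-control signalling are all kernel objects. A rough sketch of the POSIX surface (error handling omitted):

    /* The kernel machinery in question: a PTY master/slave pair with
     * the line discipline sitting in the kernel between the two ends.
     * Standard POSIX calls; error handling omitted for brevity. */
    #define _XOPEN_SOURCE 600
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        int master = posix_openpt(O_RDWR | O_NOCTTY); /* ask the kernel for a pair */
        grantpt(master);                              /* fix slave ownership/mode */
        unlockpt(master);                             /* allow the slave to be opened */

        /* The slave end is a full kernel tty device: termios, line
         * discipline, job-control signals and all. In a Plan 9 style
         * design this machinery lives in a userspace server instead. */
        printf("slave device: %s\n", ptsname(master));

        int slave = open(ptsname(master), O_RDWR | O_NOCTTY);
        write(master, "hi\n", 3);        /* passes through the kernel ldisc */
        char buf[16];
        read(slave, buf, sizeof buf);    /* arrives cooked: "hi\n" */
        close(slave);
        close(master);
        return 0;
    }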

I don't understand that, and I also don't understand why users who enjoy text-only interaction with computers are still relying on very old designs incorporating things like "line discipline", ANSI control sequences and TERMINFO databases. A large chunk of cruft was introduced for performance reasons in the 1970s and even the 1960s, but the performance demands of writing a grid of text to a screen are very easily handled by modern hardware, and I don't understand why the cruft hasn't been replaced with something simpler.
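
The "line discipline" alone illustrates the point: by default the kernel edits input line by line (erase, kill, ^C to SIGINT) before a program ever sees it, and the first thing every full-screen program does is switch all of that off. A sketch:

    /* Default tty input is "cooked" by the kernel's line discipline:
     * buffered per line, with erase/kill editing and ^C handled for
     * you. Full-screen programs begin by turning all of it off. */
    #include <termios.h>
    #include <unistd.h>

    static struct termios saved;

    void enter_raw_mode(void)
    {
        struct termios t;
        tcgetattr(STDIN_FILENO, &t);
        saved = t;       /* keep a copy so the shell isn't left in raw mode */
        cfmakeraw(&t);   /* no canonical editing, no echo, no signal keys */
        tcsetattr(STDIN_FILENO, TCSAFLUSH, &t);
    }

    void leave_raw_mode(void)
    {
        tcsetattr(STDIN_FILENO, TCSAFLUSH, &saved);
    }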

In other words, why do users who enjoy text-only interaction with computers still emulate hardware (namely, dedicated terminals) designed in the 1960s and 1970s that mostly just displays a rectangular grid of monospaced text and consequently would be easy to implement afresh using modern techniques?

There's a bunch of complexity in every terminal emulator, for example, for doing cursor addressing. Network speeds are fast enough these days (and RAM is cheap enough) that cursor addressing is unnecessary: every update can just re-send the entire grid of text to be shown to the user.
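
Back of the envelope: an 80×24 grid is under 2 KB of text, so even 60 full redraws per second is only ~115 KB/s. A sketch of that stateless approach, needing exactly one control sequence (home the cursor):

    /* Stateless redraw: no cursor addressing, just home the cursor
     * and rewrite the whole grid. 80x24 = 1920 cells, ~2 KB a frame;
     * at 60 Hz that is roughly 115 KB/s on the wire. */
    #include <stdio.h>

    #define COLS 80
    #define ROWS 24

    void redraw(const char grid[ROWS][COLS])
    {
        fputs("\x1b[H", stdout);             /* home the cursor */
        for (int r = 0; r < ROWS; r++) {
            fwrite(grid[r], 1, COLS, stdout);
            if (r < ROWS - 1)
                fputs("\r\n", stdout);       /* no newline after last row */
        }
        fflush(stdout);
    }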

Also, I think the protocol used in communication between the terminal and the computer is stateful for no reason that remains valid nowadays.

replies(3): >>45233584 #>>45234006 #>>45235605 #
1. whitten ◴[] No.45233584[source]
I think the fact that the line protocol for DEC VT terminals was codified as the ANSI X3.64 standard is why the issue hasn't been addressed or modernized.

See https://en.m.wikipedia.org/wiki/ANSI_escape_code
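
For reference, this is the kind of stateful cursor addressing under discussion: X3.64/ECMA-48 CSI sequences that the terminal (or emulator) must parse and apply to its own cursor state. A small illustration:

    /* X3.64 / ECMA-48 control sequences: the terminal keeps cursor
     * state and the host mutates it with escape sequences. */
    #include <stdio.h>

    /* Move the cursor to 1-based (row, col) and write a string there. */
    void put_at(int row, int col, const char *s)
    {
        printf("\x1b[%d;%dH%s", row, col, s);   /* CSI row;col H = CUP */
    }

    int main(void)
    {
        printf("\x1b[2J");         /* CSI 2 J = erase entire display */
        put_at(12, 35, "hello");
        put_at(24, 1, "");         /* park the cursor on the bottom line */
        fflush(stdout);
        return 0;
    }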