
611 points LorenDB | 2 comments
dvratil ◴[] No.43908097[source]
The one thing that sold me on Rust (going from C++) was that there is a single way errors are propagated: the Result type. No need to bother with exceptions, functions returning bool, functions returning 0 on success, functions returning 0 on error, functions returning -1 on error, functions returning negative errno on error, functions taking an optional pointer to bool to indicate error (optionally), or functions taking a reference to std::error_code to set an error (and having an overload with the same name that throws an exception on error if you forget to pass the std::error_code). I understand there's 30 years of history, but it is still annoying that even the standard library is not consistent (or striving for consistency).

Then you top it off with the `?` shortcut and the functional interface of Result, and suddenly error handling becomes fun and easy to deal with, rather than just "return false" with a "TODO: figure out error handling".
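A minimal sketch of what the comment describes, using a hypothetical `parse_port` helper: `?` propagates the Err early, and Result's combinators compose on top.

```rust
use std::num::ParseIntError;

// Hypothetical helper: `?` returns early with the Err on failure.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    let n: u16 = s.trim().parse()?;
    Ok(n)
}

fn main() {
    assert_eq!(parse_port(" 8080 "), Ok(8080));
    // The functional interface composes on top of Result:
    assert_eq!(parse_port("79").map(|p| p + 1), Ok(80));
    assert!(parse_port("not-a-port").is_err());
}
```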

replies(24): >>43908133 #>>43908158 #>>43908212 #>>43908219 #>>43908294 #>>43908381 #>>43908419 #>>43908540 #>>43908623 #>>43908682 #>>43908981 #>>43909007 #>>43909117 #>>43909521 #>>43910388 #>>43912855 #>>43912904 #>>43913484 #>>43913794 #>>43914062 #>>43914514 #>>43917029 #>>43922951 #>>43924618 #
zozbot234 ◴[] No.43908381[source]
> The one thing that sold me on Rust (going from C++) was that there is a single way errors are propagated: the Result type. No need to bother with exceptions

This isn't really true since Rust has panics. It would be nice to have out-of-the-box support for a "no panics" subset of Rust, which would also make it easier to properly support linear (no auto-drop) types.
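The point about panics can be seen in a toy example: slice indexing panics with nothing in its signature hinting at it, while the `.get()` alternative surfaces the failure as an Option.

```rust
use std::panic;

// Returns true if indexing `v` at `i` panics. Out-of-bounds indexing
// does, even though no Result appears anywhere in the signature.
fn index_panics(v: &[i32], i: usize) -> bool {
    panic::catch_unwind(panic::AssertUnwindSafe(|| v[i])).is_err()
}

fn main() {
    panic::set_hook(Box::new(|_| {})); // silence the panic message for the demo
    let v = [1, 2, 3];
    assert!(index_panics(&v, 10)); // v[10] panics
    assert!(!index_panics(&v, 0)); // in-bounds access is fine
    assert_eq!(v.get(10), None);   // the non-panicking alternative
}
```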

replies(6): >>43908410 #>>43908496 #>>43908674 #>>43908939 #>>43910721 #>>43914882 #
kelnos ◴[] No.43908674[source]
I wish more people (and crate authors) would treat panic!() as it really should be treated: only for absolutely unrecoverable errors that indicate that some sort of state is corrupted and that continuing wouldn't be safe from a data- or program-integrity perspective.

Even then, though, I do see a need to catch panics in some situations: if I'm writing some sort of API or web service, and there's some inconsistency in a particular request (even if it's because of a bug I've written), I probably really would prefer only that request to abort, not for the entire process to be torn down, terminating any other in-flight requests that might be just fine.

But otherwise, you really should just not be catching panics at all.
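The per-request containment described above can be sketched with `std::panic::catch_unwind` and a hypothetical `handle` function; a panic aborts only the offending request, not its in-flight siblings.

```rust
use std::panic;

// Hypothetical handler: one request trips a bug and panics.
fn handle(req: &str) -> String {
    if req == "bad" {
        panic!("bug in handler");
    }
    format!("ok: {req}")
}

// Abort only the offending request; keep serving the rest.
fn serve(reqs: &[&str]) -> Vec<String> {
    reqs.iter()
        .map(|r| {
            panic::catch_unwind(panic::AssertUnwindSafe(|| handle(r)))
                .unwrap_or_else(|_| "500 internal error".to_string())
        })
        .collect()
}

fn main() {
    panic::set_hook(Box::new(|_| {})); // keep stderr quiet for the demo
    assert_eq!(
        serve(&["a", "bad", "b"]),
        vec!["ok: a", "500 internal error", "ok: b"]
    );
}
```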

replies(6): >>43908859 #>>43909602 #>>43910885 #>>43912418 #>>43913661 #>>43914377 #
monkeyelite ◴[] No.43913661[source]
> I probably really would prefer only that request to abort, not for the entire process to be torn down,

This is a sign you are writing an operating system instead of using one. Your web server should be handling requests from a pool of processes - so that you get real memory isolation and can crash when there is a problem.
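The isolation property being claimed here can be demonstrated in miniature: a child process that dies takes nothing down with it, and the parent just observes an exit status. This sketch assumes a Unix `sh` is available; a real pre-fork server would fork workers and hand each one a connection.

```rust
use std::process::Command;

// Run a "worker" in its own process and report its exit code.
// The `script` stands in for a request handler that may crash.
fn run_worker(script: &str) -> Option<i32> {
    Command::new("sh")
        .arg("-c")
        .arg(script)
        .status()
        .expect("failed to spawn worker")
        .code()
}

fn main() {
    assert_eq!(run_worker("exit 7"), Some(7)); // this "request" crashed
    assert_eq!(run_worker("exit 0"), Some(0)); // the next one is unaffected
}
```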

replies(1): >>43913729 #
tsimionescu ◴[] No.43913729[source]
Even if you used a pool of processes, that's still not one process per request, and you still don't want one request crashing to tear down unrelated requests.
replies(1): >>43913790 #
monkeyelite ◴[] No.43913790[source]
I question both things. I would first of all handle each request in its own process.

If there were a special case where that would not work, then the design dictates that requests are not independent, and there must be a risk of interference (they are in the same process!).

What I definitely do not want is a bug ridden “crashable async sub task” system built in my web program.

replies(1): >>43913832 #
tsimionescu ◴[] No.43913832[source]
This is simply a wrong idea about how to write web servers. You're giving up scalability massively, only to gain a minor amount of safety - one that is virtually irrelevant in a memory safe language, which you should anyway use. The overhead of process-per-request, or even thread-per-request, is absurd if you're already using a memory safe language.
replies(1): >>43913899 #
monkeyelite ◴[] No.43913899[source]
> You're giving up scalability massively

You're vastly overestimating the overhead of processes and the number of simultaneous web connections.

> only to gain a minor amount of safety

What you're telling me is that performance (memory?) is such a high priority that you're willing to make correctness and security tradeoffs.

And I'm saying that's OK; one of those tradeoffs is that a crash might bring down more than one request.

> one that is virtually irrelevant in a memory safe language

Your memory safe language uses C libraries in its process.

Memory safe languages have bugs all the time. The attack surface is every line of your program and runtime.

Memory is only one kind of resource and privilege. Process isolation is key for managing resource access - for example file descriptors.

Chrome is a case study of these principles. Everybody thought isolating JS and HTML pages should be easy; nobody could get it right, and Chrome instead wrapped each page in a process.

replies(2): >>43914140 #>>43914180 #
simiones ◴[] No.43914180[source]
Please find one web server being actively developed using one process per request.

Handling thousands of concurrent requests is table stakes for a simple web server. Handling thousands of concurrent processes is beyond most OSs. The context switching overhead alone would consume much of the CPU of the system. Even hundreds of processes will mean a good fraction of the CPU being spent solely on context switching - which is a terrible place to be.

replies(2): >>43914418 #>>43916149 #
monkeyelite ◴[] No.43916149[source]
> Handling thousands of concurrent processes is beyond most OS

It works fine on Linux - the operating system for the internet. Have you tried it?

> good fraction of the CPU being spent solely on context switching

I was waiting for this one. Threads and processes do the same amount of context switching. The overhead of a process switch is a little higher. The main cost is memory.

replies(1): >>43920060 #
ordu ◴[] No.43920060[source]
> Threads and processes do the same amount of context switching.

Yes, and therefore real webservers use a limited number of threads/processes (in the same ballpark as the number of CPU cores). The modern approach is to use green threads, which are really cheap to switch: little more than storing registers, loading registers, and a jmp.
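The fixed-pool approach mentioned here can be sketched with std threads and a shared queue: a handful of workers drain "requests" instead of one thread or process per request. (Doubling each job stands in for request handling.)

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// A fixed pool of OS threads draining a shared "request" queue.
fn run_pool(workers: usize, jobs: Vec<i32>) -> i32 {
    let (job_tx, job_rx) = mpsc::channel::<i32>();
    let job_rx = Arc::new(Mutex::new(job_rx));
    let (done_tx, done_rx) = mpsc::channel::<i32>();

    let handles: Vec<_> = (0..workers)
        .map(|_| {
            let job_rx = Arc::clone(&job_rx);
            let done_tx = done_tx.clone();
            thread::spawn(move || loop {
                let job = job_rx.lock().unwrap().recv(); // lock released here
                match job {
                    Ok(n) => done_tx.send(n * 2).unwrap(), // "handle" a request
                    Err(_) => break, // queue closed: worker exits
                }
            })
        })
        .collect();

    for j in jobs {
        job_tx.send(j).unwrap();
    }
    drop(job_tx);  // close the queue so workers stop
    drop(done_tx); // keep only the workers' clones alive

    let total: i32 = done_rx.iter().sum();
    for h in handles {
        h.join().unwrap();
    }
    total
}

fn main() {
    assert_eq!(run_pool(4, vec![1, 2, 3, 4, 5]), 30);
}
```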

> The main cost is memory.

The main cost is scheduling, not switching per se. Preemptive multitasking needs to deal with priorities to not waste time, and the algorithms that do it are mostly O(N). All these O(N) calculations need to be completed multiple times per second; the higher the frequency of switching, the more work there is to do. When you have thousands of processes it is the main cost. If you have tens of thousands it starts to bite hard.

replies(1): >>43921184 #
monkeyelite ◴[] No.43921184[source]
> The main cost is scheduling, not switching per se. Preemptive multitasking needs to deal with priorities to not waste time, and algorithms that do it

The person I am having a conversation with is advocating for threads instead of processes. How do you think threads work?

> Modern approach is to use green threads which are really cheap to switch, it is like store registers, read registers and jmp.

That’s certainly the popular approach. As I said at the beginning this approach is making a mini operating system with more bugs and less security rather than leveraging the capabilities of your operating system.

Once again, I'm waiting to hear about your experience of maxing out processes and after that having to switch to green threads.

replies(2): >>43921455 #>>43924728 #
tsimionescu ◴[] No.43924728[source]
> The person I am having a conversation with is advocating for threads instead of processes. How do you think threads work?

I was certainly not; I explicitly said that thread-per-request is as bad as process-per-request. I could even agree that it's the worst of both worlds to some extent - none of the isolation, almost all of the overhead (except if you're using a language with a heavy runtime, like Java, where spawning a new JVM has a huge cost compared to a new thread in an existing JVM).

Modern operating systems provide many mechanisms for doing async IO specifically to prevent the need for spawning and switching between thousands of processes. Linux in particular has invested heavily in this, from select, to poll, to epoll, and now to io_uring.
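The readiness-based style those interfaces enable can be shown with std alone (std has no epoll binding; this just demonstrates the non-blocking "try, see WouldBlock, retry when ready" shape that epoll loops build on, spinning with `yield_now` where a real loop would block in epoll_wait).

```rust
use std::io::{self, Read, Write};
use std::net::{TcpListener, TcpStream};

// Non-blocking accept on a loopback socket, then one 4-byte read.
fn echo_once() -> io::Result<[u8; 4]> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    listener.set_nonblocking(true)?;
    let addr = listener.local_addr()?;

    let mut client = TcpStream::connect(addr)?;
    client.write_all(b"ping")?;

    // Instead of blocking a thread/process in accept(), poll for readiness.
    let (mut conn, _) = loop {
        match listener.accept() {
            Ok(pair) => break pair,
            Err(e) if e.kind() == io::ErrorKind::WouldBlock => {
                std::thread::yield_now() // a real loop would epoll_wait here
            }
            Err(e) => return Err(e),
        }
    };
    conn.set_nonblocking(false)?; // inheritance of the flag is platform-specific
    let mut buf = [0u8; 4];
    conn.read_exact(&mut buf)?;
    Ok(buf)
}

fn main() -> io::Result<()> {
    assert_eq!(&echo_once()?, b"ping");
    Ok(())
}
```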

OS process schedulers are really a poor tool for doing massively parallel IO. They are a general purpose algorithm that has to keep in mind many possible types of heterogeneous processes, and has no insight into the plausible behaviors of those. For a constrained problem like parallel IO, it's a much better idea to use a purpose-built algorithm and tool. And they have simply not been optimized with this kind of scale in mind, because it's much more important and common use case to run quickly for a small number of processes than it is to scale up to thousands. There's a reason typical ulimit configurations are limited to around 1000 threads/processes per system for all common distros.

replies(1): >>43952050 #
monkeyelite ◴[] No.43952050[source]
> Linux in particular has invested heavily in this, from select, to poll, to epoll, and now unto io_uring.

Correction: people who wanted to do async IO went and added additional support for it. The primary driver was node.js.

> And they have simply not been optimized with this kind of scale in mind,

Yes, because processes do not sacrifice security and reliability. That's the difference.

The fallacy here is assuming that a process is just worse for hand-wavy reasons and that your language feature has a secret sauce.

If it's not context switching, then that means you have other scheduling problems, because you cannot be preempted.

> There's a reason typical ulimit configurations are limited to around 1000 threads/processes per system

STILL waiting to hear about your experience of maxing out Linux processes on a web server - and then fixing it with green threads.

I suspect it hasn’t happened.