Then you top it off with the `?` shortcut and the functional interface of Result, and suddenly error handling becomes fun and easy to deal with, rather than just "return false" with a "TODO: figure out error handling".
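Roughly what that looks like in practice - a minimal sketch, with the file name and the `read_port` helper invented for illustration:

```rust
use std::fs;

// Each fallible step uses `?` to propagate its error upward,
// instead of returning a bare `false` with a TODO.
fn read_port(path: &str) -> Result<u16, Box<dyn std::error::Error>> {
    let text = fs::read_to_string(path)?; // io::Error propagates
    let port: u16 = text.trim().parse()?; // ParseIntError propagates
    Ok(port)
}

fn main() {
    // The functional interface: recover with a combinator instead of if-chains.
    let port = read_port("port.txt").unwrap_or_else(|e| {
        eprintln!("falling back to 8080: {e}");
        8080
    });
    println!("listening on {port}");
}
```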
This isn't really true since Rust has panics. It would be nice to have out-of-the-box support for a "no panics" subset of Rust, which would also make it easier to properly support linear (no auto-drop) types.
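The closest existing knob is the abort-on-panic profile setting, which strips the unwinding machinery but is not a "no panics" subset - panicking code still compiles and still kills the process at runtime:

```toml
# Cargo.toml: abort immediately on panic instead of unwinding.
# Smaller binaries and no catch_unwind, but panics themselves remain possible.
[profile.release]
panic = "abort"
```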
Even then, though, I do see a need to catch panics in some situations: if I'm writing some sort of API or web service, and there's some inconsistency in a particular request (even if it's because of a bug I've written), I probably really would prefer only that request to abort, not for the entire process to be torn down, terminating any other in-flight requests that might be just fine.
But otherwise, you really should just not be catching panics at all.
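For that request-isolation case, the standard library's `catch_unwind` is the usual building block. A minimal sketch - the handler and its bug are invented for illustration, and real frameworks wrap this far more carefully:

```rust
use std::panic::{self, AssertUnwindSafe};

// A buggy handler: panics on malformed input.
fn handle_request(body: &str) -> String {
    let n: usize = body.parse().unwrap(); // bug: panics on bad input
    format!("{}", n * 2)
}

// Turn a panic into an error for this one request; the process lives on.
fn serve(body: &str) -> Result<String, String> {
    panic::catch_unwind(AssertUnwindSafe(|| handle_request(body)))
        .map_err(|_| "500 internal server error".to_string())
}

fn main() {
    assert_eq!(serve("21").unwrap(), "42");
    assert!(serve("oops").is_err()); // panic caught, other requests unaffected
}
```

Note this only works under the default `panic = "unwind"`; with `panic = "abort"` there is nothing to catch.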
This is a sign you are writing an operating system instead of using one. Your web server should be handling requests from a pool of processes - so that you get real memory isolation and can crash when there is a problem.
If there were a special case where that would not work, then the design dictates that requests are not independent, and there must be a risk of interference (they are in the same process!).
What I definitely do not want is a bug-ridden "crashable async sub-task" system built into my web program.
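For contrast, a bare-bones sketch of the process-pool model being advocated here (Unix only, assuming the `libc` crate; supervision and request parsing omitted):

```rust
use std::io::Write;
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // The parent binds once; forked workers inherit the listening socket
    // and the kernel load-balances accept() among them.
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    for _ in 0..4 {
        // SAFETY: fork before any threads are spawned.
        if unsafe { libc::fork() } == 0 {
            for stream in listener.incoming() {
                // A crash here takes down only this worker's request;
                // the other workers keep serving.
                let mut s = stream?;
                let _ = s.write_all(b"HTTP/1.0 200 OK\r\n\r\nok\n");
            }
            return Ok(());
        }
    }
    loop {
        std::thread::park(); // parent just keeps the workers alive
    }
}
```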
You're vastly overestimating the overhead of processes and the number of simultaneous web connections.
> only to gain a minor amount of safety
What you’re telling me is performance (memory?) is such a high priority you’re willing to make correctness and security tradeoffs.
And I'm saying that's OK; one of those tradeoffs is that a crash might bring down more than one request.
> one that is virtually irrelevant in a memory safe language
Your memory safe language uses C libraries in its process.
Memory safe languages have bugs all the time. The attack surface is every line of your program and runtime.
Memory is only one kind of resource and privilege. Process isolation is key for managing resource access - for example file descriptors.
Chrome is a case study in these principles. Everybody thought isolating the JS and HTML of separate pages should be easy; nobody could get it right, and Chrome instead wrapped each page in a process.
Handling thousands of concurrent requests is table stakes for a simple web server. Handling thousands of concurrent processes is beyond most OSes. The context-switching overhead alone would consume much of the CPU of the system. Even hundreds of processes will mean a good fraction of the CPU being spent solely on context switching - which is a terrible place to be.
It works fine on Linux - the operating system for the internet. Have you tried it?
> good fraction of the CPU being spent solely on context switching
I was waiting for this one. Threads and processes do the same amount of context switching; the overhead of a process switch is a little higher. The main cost is memory.
Yes, and therefore real web servers use a limited number of threads/processes (in the same ballpark as the number of CPU cores). The modern approach is to use green threads, which are really cheap to switch: it's basically store registers, load registers, and jmp.
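Rust's closest mainstream analog is async tasks (stackless rather than stackful green threads, but the cheap-switch argument is the same). A toy demonstration, assuming tokio as the runtime:

```rust
// Ten thousand concurrent "connections" as tasks: a switch is just
// saving/restoring a small state machine, not a kernel context switch.
#[tokio::main]
async fn main() {
    let mut handles = Vec::new();
    for i in 0..10_000 {
        handles.push(tokio::spawn(async move {
            // Each task yields at .await points; the scheduler
            // multiplexes all of them over a handful of OS threads.
            tokio::time::sleep(std::time::Duration::from_millis(10)).await;
            i
        }));
    }
    for h in handles {
        h.await.unwrap();
    }
}
```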
> The main cost is memory.
The main cost is scheduling, not switching per se. Preemptive multitasking needs to deal with priorities to avoid wasting time, and the algorithms that do this are mostly O(N). All these O(N) calculations need to be completed many times per second, and the higher the switching frequency, the more work there is to do. When you have thousands of processes it is the main cost; if you have tens of thousands it starts to bite hard.
The person I am having a conversation with is advocating for threads instead of processes. How do you think threads work?
> The modern approach is to use green threads, which are really cheap to switch: it's basically store registers, load registers, and jmp.
That’s certainly the popular approach. As I said at the beginning this approach is making a mini operating system with more bugs and less security rather than leveraging the capabilities of your operating system.
Once again, I'm waiting to hear about your experience of maxing out processes and then having to switch to green threads.
Are they? I looked back and found this quote of theirs: "The overhead of process-per-request, or even thread-per-request, is absurd if you're already using a memory safe language." That doesn't read as advocacy for thread-per-request to me.
> As I said at the beginning this approach is making a mini operating system with more bugs and less security rather than leveraging the capabilities of your operating system.
Let's look at Apache, for example. It starts a few processes and/or threads, but then each thread deals with a lot of connections. The threads Apache starts are for spreading work over several CPUs, and maybe for overcoming some limits of select/poll/epoll. The main approach is to track the state of each connection: when something happens on a socket, Apache finds the state of that connection and deals with the events on the socket. Then it stores the new state and moves on to deal with other sockets in the same manner.
It is like green threads but without green threads. Green threads streamline all this state-keeping by allowing each connection to have its own stack. And I'd say that is easier to do right than writing a finite automaton for HTTP/HTTPS.
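A sketch of the shape of that state-keeping, with the epoll/kqueue event source elided and the states invented for illustration:

```rust
use std::collections::HashMap;

// The per-connection "stack" becomes an explicit state enum,
// advanced one step at a time as socket events arrive.
enum Conn {
    ReadingRequest { buf: Vec<u8> },
    WritingResponse { remaining: Vec<u8> },
}

fn on_readable(conns: &mut HashMap<u64, Conn>, fd: u64, bytes: &[u8]) {
    let done = match conns.get_mut(&fd) {
        Some(Conn::ReadingRequest { buf }) => {
            buf.extend_from_slice(bytes);
            buf.ends_with(b"\r\n\r\n") // request fully read?
        }
        _ => false, // events for other states handled elsewhere
    };
    if done {
        // Transition: store the next state for this connection.
        conns.insert(fd, Conn::WritingResponse {
            remaining: b"HTTP/1.0 200 OK\r\n\r\nok\n".to_vec(),
        });
    }
}

fn main() {
    let mut conns = HashMap::new();
    conns.insert(7, Conn::ReadingRequest { buf: Vec::new() });
    on_readable(&mut conns, 7, b"GET / HTTP/1.0\r\n\r\n");
    assert!(matches!(conns[&7], Conn::WritingResponse { .. }));
}
```

Every suspension point a green thread gets for free has to become an explicit state here, which is exactly why the automaton is harder to get right.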
> Once again, I'm waiting to hear about your experience of maxing out processes and then having to switch to green threads.
Oh, I didn't. A long, long time ago I was reading stuff on networking, and all of it was of one opinion: 10k kernel tasks may be a tolerable solution, but 100k is bad. IIRC Apache had a document describing its internal architecture and explaining why it is the way it is.
So I wouldn't even try to start thousands of threads. I mean, I did try to start thousands of processes when I was young and learned about fork bombs, and that experience confirmed for me that thousands of processes is not a good idea.
Moreover, I completely agree with them: if you use a memory-safe language, then it is strange to pay costs for preemptive multitasking just to have separate virtual address spaces. It would be better to get a virtual machine with a JIT compiler and run the code for different connections on different instances of the virtual machine. O(1) cooperative switching will beat O(N) preemptive switching. To my mind, hardware memory management is overrated.
Apache has years of engineering work behind it - and almost weekly patches to fix security issues. Many of those security issues would go away if it were not using special techniques to optimize performance.
But the best part of the web is that it's modular. So your application doesn't need to do that; it can leverage those benefits without the complexity cascade.
For example, Apache can manage more connections than your application needs running processes for.
> I was reading stuff on networking….
That’s exactly my point. Too many people are repeating advice from Google or Facebook and not actually thinking about real problems they face.
Can you serve more requests using specialized task management? Yes. You can make a mini-OS with fewer features to squeeze out more scheduling performance and that’s what some big companies did.
But you will pay for that with reduced security and reliability. To bring it back to my original complaint - you must accept that a crash can bring down multiple requests.
And it’s an insane default to design Rust around. It’s especially confusing to make all these arguments about how “unsafe” languages are, but then ignore OS safety in hopes of squeezing out a little more perf.
> So I wouldn't even try to start thousands of threads.
Please try it before arguing it doesn’t work. Fork bombing is recursive and unrelated.
> if you use a memory-safe language, then it is strange to pay costs for preemptive multitasking just to have separate virtual address spaces
Then why do these "memory-safe" languages need constant security patches? Why does Chrome need to wrap each page's JS in its own process?
In theory you’re right. If they are actually memory-safe then you don’t need to consider address spaces. But in practice the attack surface is massive and processes give you stronger invariants.