Then you top it off with the `?` shortcut and the functional interface of `Result`, and suddenly error handling becomes fun and easy to deal with, rather than just "return false" with a "TODO: figure out error handling".
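A minimal sketch of what that looks like in practice: each fallible step returns a `Result`, and `?` propagates the error upward instead of every call site hand-rolling a boolean return. (The `parse_pair` function here is just an illustrative example, not from any particular codebase.)

```rust
use std::num::ParseIntError;

// Each fallible step returns a Result; `?` early-returns the error,
// so there is no "return false" with a TODO attached.
fn parse_pair(a: &str, b: &str) -> Result<(i32, i32), ParseIntError> {
    let x = a.parse::<i32>()?; // propagates ParseIntError on failure
    let y = b.parse::<i32>()?;
    Ok((x, y))
}

fn main() {
    assert_eq!(parse_pair("1", "2"), Ok((1, 2)));
    assert!(parse_pair("1", "oops").is_err());
}
```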
This isn't really true since Rust has panics. It would be nice to have out-of-the-box support for a "no panics" subset of Rust, which would also make it easier to properly support linear (no auto-drop) types.
Even then, though, I do see a need to catch panics in some situations: if I'm writing some sort of API or web service, and there's some inconsistency in a particular request (even if it's because of a bug I've written), I probably really would prefer only that request to abort, not for the entire process to be torn down, terminating any other in-flight requests that might be just fine.
But otherwise, you really should just not be catching panics at all.
This is a sign you are writing an operating system instead of using one. Your web server should be handling requests from a pool of processes - so that you get real memory isolation and can crash when there is a problem.
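A toy sketch of the process-pool idea, assuming a Unix-like system with `sh` available: the parent dispatches work to a child process, and a crashing worker only costs that worker, never the dispatcher. (The shell one-liner stands in for a real worker binary.)

```rust
use std::process::Command;

fn main() {
    // Stand-in for a worker that hit a fatal bug: it exits non-zero.
    // The parent observes the failure through the exit status and
    // moves on; no shared in-process state was at risk.
    let crashed = Command::new("sh")
        .arg("-c")
        .arg("exit 1")
        .status()
        .expect("failed to spawn worker");
    assert!(!crashed.success());

    // A healthy worker completes normally.
    let ok = Command::new("sh")
        .arg("-c")
        .arg("exit 0")
        .status()
        .expect("failed to spawn worker");
    assert!(ok.success());
}
```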
If there were a special case that would not work, then the design dictates that requests are not independent, and there must be a risk of interference (they are in the same process!).
What I definitely do not want is a bug-ridden “crashable async sub-task” system built into my web program.
You’re vastly overestimating the overhead of processes and the number of simultaneous web connections.
> only to gain a minor amount of safety
What you’re telling me is performance (memory?) is such a high priority you’re willing to make correctness and security tradeoffs.
And I’m saying that’s OK; one of those tradeoffs is that crashing might bring down more than one request.
> one that is virtually irrelevant in a memory safe language
Your memory-safe language uses C libraries in its process.
Memory-safe languages have bugs all the time. The attack surface is every line of your program and runtime.
Memory is only one kind of resource and privilege. Process isolation is key for managing resource access - for example file descriptors.
Chrome is a case study in these principles. Everybody thought isolating JS and HTML pages should be easy - nobody could get it right, and Chrome instead wrapped each page in a process.
It's less the actual overhead of the process and more the savings you get from sharing. You can reuse database connections, keep in-memory caches, in-memory rate limits, and various other things. You can use shared memory (which is very difficult to manage) or an additional common process, but either way you are effectively back to square one with regard to shared state that can be corrupted.
I just said that one of the costs of those savings is that crashing may bring down multiple requests - and you should design with that trade-off in mind.