160 points todsacerdoti | 1 comment
roca ◴[] No.41902494[source]
I think the importance of parallelism has been overlooked by the OP and most commenters here. Even laptops these days have at least 8 cores; a good scalable parallel implementation of a tool will crush the performance of any single-threaded implementation. JS is not a great language for writing scalable parallel code. Rust is. Not only do Rust and its ecosystem (e.g. Rayon) make a lot of parallel idioms easy, Rust's thread safety guarantees let you write shared-memory code without creating a maintenance nightmare. In JS you can't write such code at all.

So yes, you can do clever tricks with ArrayBuffers, and the JS VMs will do incredibly clever optimizations for you, but as long as your code is running on one core you cannot be competitive. (Unless your problem is inherently serial, but very few "tool"-type problems are.)

replies(3): >>41902969 #>>41903945 #>>41904571 #
nine_k ◴[] No.41903945[source]
OTOH in production settings you can run 6-8 copies of your Node app to utilize the 8 physical CPU cores without (much) contention; the JS engine runs additional threads for GC and other housekeeping. Or you can use multiple "web workers", which are VM copies in disguise. The async nature of the JS runtime also allows for a lot of extra concurrency (overlapping waits on I/O completion) on one core if your load is I/O-bound, as is the case for most web backends.

The same does not hold for frontend use, as it's for one user, and latency trumps throughput in the perception of being fast. You need great single-thread performance, plus the ability to offload work to parallel threads where possible, to keep overall latency low. That's basically the approach of game engines.
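That offload pattern can be sketched in a few lines of Rust with a std channel; `offload` here is a hypothetical helper, not a real library API:

```rust
use std::sync::mpsc;
use std::thread;

// Kick off heavy work on a background thread and hand back a receiver
// that the latency-sensitive thread can poll or block on later.
fn offload<T: Send + 'static>(
    work: impl FnOnce() -> T + Send + 'static,
) -> mpsc::Receiver<T> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(work());
    });
    rx
}

fn main() {
    // Stand-in for real work (asset decoding, pathfinding, etc.).
    let rx = offload(|| (1..=10_000_000u64).sum::<u64>());
    // ... the main thread stays responsive here ...
    println!("{}", rx.recv().unwrap());
}
```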

replies(1): >>41904323 #
chrismorgan ◴[] No.41904323[source]
When talking about tooling: around five years ago, I introduced parallelism to a Node.js-based build system, in a section that was embarrassingly parallel. It was really painful to implement, made it noticeably harder to maintain, and my vague memory is that on a 4-core/8-thread machine I only got something like a 3–4× speedup. I think workers are mature enough now that it wouldn’t be quite so bad, but it would still be fairly painful.

In Rust, I’d have added Rayon as a dependency to my Cargo.toml, inserted `use rayon::prelude::*;` (or a more specific import, if I preferred) into my file, changed one `.iter()` to `.par_iter()`, and voilà, it’d have compiled (all the types would have satisfied Send) and given probably at least a 6–7× speedup.

Seriously, when you get to talking about a lot of performance tricks and such (I’m thinking things like the bit maps referred to at the end), even when they’re *possible* in JavaScript, they’re frequently (I suspect even normally) way easier to implement in Rust.
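For instance, a bit-map trick of the kind mentioned is a few lines of safe Rust; this `BitSet` is an illustrative sketch, not the implementation the linked article uses:

```rust
// A minimal fixed-capacity bitset backed by u64 words, the kind of
// structure bit-map tricks rely on for dense membership tests.
struct BitSet {
    words: Vec<u64>,
}

impl BitSet {
    fn new(capacity: usize) -> Self {
        BitSet { words: vec![0; capacity.div_ceil(64)] }
    }
    fn set(&mut self, i: usize) {
        self.words[i / 64] |= 1 << (i % 64);
    }
    fn contains(&self, i: usize) -> bool {
        (self.words[i / 64] >> (i % 64)) & 1 == 1
    }
    fn count(&self) -> u32 {
        // popcount each word; a single instruction on most CPUs
        self.words.iter().map(|w| w.count_ones()).sum()
    }
}

fn main() {
    let mut seen = BitSet::new(1000);
    seen.set(3);
    seen.set(997);
    println!("{} {}", seen.contains(3), seen.count()); // true 2
}
```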

replies(3): >>41906156 #>>41906278 #>>41909582 #
nine_k ◴[] No.41906278[source]
This is true for new development.

I only mean that utilizing extra CPU cores in JS is a bit easier for an API server, with tons of identical parallel requests running, where the question is usually one of RPS and tail latency, than for single-task use cases like parallelized builds.