
160 points todsacerdoti | 3 comments
noname120 ◴[] No.41898858[source]
> I just don’t think we’ve exhausted all the possibilities of making JavaScript tools faster

Rewriting in more performant languages spares you from the pain of optimization. These tools written in Rust are somehow 100× as fast despite not being optimized at all.

JavaScript is so slow that you have to optimize; with Rust (and other performant languages) you don't even need to, because performance just doesn't bubble up as a problem at all, letting you focus on building the actual tool.

replies(6): >>41898915 #>>41898923 #>>41898937 #>>41898975 #>>41901141 #>>41901628 #
1. dan-robertson ◴[] No.41898975[source]
I think there’s a lot of bias in the samples one tends to see:

- you’re less likely to hear about a failed rewrite

- rewrites often gain from having a much better understanding of the problem/requirements than the existing solution which was likely developed more incrementally

- if you know you will care about performance a lot, you hopefully will think about how to architect things in a way that is capable of achieving good performance. (A non-CPU example: if you are gluing together streams of data with processing steps, you may not think much about buffering; if you know you will care about throughput, you will probably have to think about batching and maybe some kind of fan-out->map->fan-in; if you know you will care about latency, you will probably think about each extra hop or batch-building step)

- hopefully people do a bit of napkin math to decide whether rewriting something to be faster will achieve the goals, so you only see the rewrites that people thought would be beneficial (because, e.g., you're touching a lot of memory, so a better memory layout could help)
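To make the batching point above concrete, here is a minimal, hypothetical Rust sketch (not from the thread; the names `process` and `batched_sum` are mine): a fan-out -> map -> fan-in pipeline where each worker sends one batched message instead of one message per item.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical per-item processing step: square each value.
fn process(x: u64) -> u64 {
    x * x
}

// Fan-out -> map -> fan-in across `workers` threads. Each thread
// processes a whole batch and sends one message, which amortizes
// per-message overhead compared to sending item by item.
fn batched_sum(inputs: Vec<u64>, workers: usize) -> u64 {
    let (tx, rx) = mpsc::channel();
    let chunk = ((inputs.len() + workers - 1) / workers).max(1);
    for batch in inputs.chunks(chunk) {
        let batch = batch.to_vec();
        let tx = tx.clone();
        thread::spawn(move || {
            let out: Vec<u64> = batch.into_iter().map(process).collect();
            tx.send(out).unwrap();
        });
    }
    drop(tx); // close the channel so the receiving iterator terminates

    // Fan-in: drain every batch and combine the results.
    rx.iter().flatten().sum()
}

fn main() {
    let total = batched_sum((1..=1000).collect(), 4);
    println!("{total}"); // 333833500 = 1² + 2² + … + 1000²
}
```

The same architecture also illustrates the latency trade-off: waiting to fill a batch adds a hop's worth of delay per item, which is exactly the kind of decision that is hard to retrofit later.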

I feel like you’re much more likely to see ‘we found some JavaScript that was too useful for its own good, figured out how to rewrite it with better algorithms/data structures, concurrency, and SIMD instructions, which we used Rust to get’ than ‘our service receives one request, sends 10 requests to 5 different services, collects the results and responds; we rewrote it in Rust but the performance is the same because it turns out most of what our service did was waiting’.

replies(1): >>41898997 #
2. winwang ◴[] No.41898997[source]
Only semi-relevant, but there's also the fact that lower-level languages can be auto-optimized more deeply -- but that's also more my intuition (would love to get learnt if I'm wrong).

For example, I'd expect that Rust (or rustc I guess) can auto-vectorize more than Node/Deno/etc.
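As a concrete sketch of what auto-vectorization means here (the function and its shapes are mine, not from the thread): an ahead-of-time compiler knows these are contiguous `i32` slices, so it can emit SIMD code directly, whereas a JS engine must first observe element types at runtime.

```rust
// With optimizations on (e.g. `cargo build --release`), LLVM will
// typically auto-vectorize this tight integer loop into SIMD loads,
// multiplies, and adds — with no intrinsics in the source.
fn dot(a: &[i32], b: &[i32]) -> i64 {
    a.iter()
        .zip(b)
        .map(|(&x, &y)| x as i64 * y as i64)
        .sum()
}

fn main() {
    let a = vec![1; 1024];
    let b = vec![2; 1024];
    println!("{}", dot(&a, &b)); // 2048
}
```

Whether the loop actually vectorizes depends on the target and optimization level, so this is an expectation, not a guarantee.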

replies(1): >>41906642 #
3. WorldMaker ◴[] No.41906642[source]
Ahead of time, perhaps. (Of course the benefit of AOT is that you can take all the time in the world and only slow down the developer cycle without impacting users. In theory you can always build a slower AOT compiler with more optimizations, even/especially for higher-level languages like JS. You can almost always trade more build time and executable size for more runtime optimizations. High-level languages can almost always use profile-guided optimization to do most of the things low-level languages use low-level data-type optimization paths for.)

A benefit of a good JIT, though, is that it can converge on such optimizations over time based on practical usage information. You trade less-optimized startup paths for profile-guided optimization of the live running application, in real time, based on real data structures.

JS has some incredible JITs, very well optimized for browser-tab life-cycles. They can eventually optimize things at a low level far further than you might expect. That ‘eventually’ is of course the rough trade-off, but it too is well matched to the browser-tab life-cycle: you generally have an interesting balance of short-lived tabs where performance isn't critical and download size is worth minimizing, versus tabs that are short-lived but returned to often, where compiled output can be cached so each new visit is slightly faster than the last, versus a few long-lived tabs where performance matters and the JIT has plenty of time to run and optimize.

This is why Node/Deno/et al. excel at long-running server applications/services (including `--watch` modes), while "one-off"/"single-run" build tools can be a bit of a worst case: they may not give the JIT enough time or warning to fully optimize things, especially when they start with no previous compilation cache every time. (The article points out that this is something you can now turn on in Node.)