I think there’s a lot of bias in the samples one tends to see:
- you’re less likely to hear about a failed rewrite
- rewrites often benefit from a much better understanding of the problem/requirements than the existing solution had, since the original was likely developed more incrementally
- if you know up front that you will care a lot about performance, you will hopefully architect things in a way that is capable of achieving it. (Non-CPU example: if you are gluing streams of data to processing steps, you may not think much about buffering; if you know you will care about throughput, you will probably have to think about batching and maybe some kind of fan-out->map->fan-in, as in the sketch after this list; if you know you will care about latency, you will probably think about each extra hop or batch-building step)
- hopefully people do a bit of napkin math to decide whether rewriting something to be faster will actually achieve the goals, so you only see the rewrites people expected to be beneficial (because, e.g., you're touching a lot of memory, so a better memory layout could help; a worked example follows below)
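To make the fan-out->map->fan-in point concrete, here's a minimal sketch in Rust (the names and the trivial doubling "map" step are made up for illustration): a single work queue feeds several worker threads, and their results funnel into one collector channel.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

fn main() {
    let n_workers = 4;

    // Fan-out: one queue of work items shared by all workers.
    let (work_tx, work_rx) = mpsc::channel::<u64>();
    let work_rx = Arc::new(Mutex::new(work_rx));

    // Fan-in: every worker sends its results to a single collector channel.
    let (result_tx, result_rx) = mpsc::channel::<u64>();

    let mut handles = Vec::new();
    for _ in 0..n_workers {
        let work_rx = Arc::clone(&work_rx);
        let result_tx = result_tx.clone();
        handles.push(thread::spawn(move || loop {
            // Take the next item; stop once the queue is closed and drained.
            let item = match work_rx.lock().unwrap().recv() {
                Ok(item) => item,
                Err(_) => break,
            };
            // The "map" step: stand-in for the real per-item processing.
            result_tx.send(item * 2).unwrap();
        }));
    }
    // Drop our copy of the result sender so the collector sees
    // end-of-stream once every worker has exited.
    drop(result_tx);

    // Produce the work, then close the queue.
    for item in 0..100u64 {
        work_tx.send(item).unwrap();
    }
    drop(work_tx);

    // Fan-in: drain results until all workers are done.
    let total: u64 = result_rx.iter().sum();
    for handle in handles {
        handle.join().unwrap();
    }
    println!("sum of processed items: {total}");
}
```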
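And as a worked example of that napkin math (all numbers made up): suppose the job scans 4 GB of data and one core sustains roughly 10 GB/s of memory bandwidth. That puts a floor of about 400 ms on any implementation, in any language. If the current version takes 500 ms, a rewrite can claw back at most ~20%; if it takes 10 s, you're presumably wasting bandwidth or cache, and a better memory layout could plausibly win an order of magnitude.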
I feel like you’re much more likely to see ‘we found some JavaScript that was too useful for its own good, figured out how to rewrite it with better algorithms/data structures, concurrency, and SIMD instructions, which we used Rust to get’ than ‘our service receives one request, sends 10 requests to 5 different services, collects the results, and responds; we rewrote it in Rust but the performance is the same because it turns out most of what our service did was waiting’.
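For contrast, here's a minimal sketch of that second, I/O-bound shape (hypothetical; assumes the tokio and futures crates, with sleeps standing in for the downstream calls): the handler fires its requests concurrently and then just waits, so the host language contributes almost nothing to the latency.

```rust
use std::time::Duration;

// Stand-in for one downstream call: all wall-clock time is spent waiting.
async fn call_downstream(i: u32) -> u32 {
    tokio::time::sleep(Duration::from_millis(50)).await; // simulated network latency
    i
}

#[tokio::main]
async fn main() {
    // Fire all 10 "requests" concurrently and collect the results.
    let results = futures::future::join_all((0..10u32).map(call_downstream)).await;
    // Total latency is ~50 ms (the slowest call), not 10 * 50 ms, and
    // essentially none of it is CPU time a rewrite could remove.
    println!("collected {} results", results.len());
}
```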