Except your browser taking 180% of available RAM, maybe.
By the way, the world could also have some bug-free software, if anyone could afford to pay for it.
> Except your browser taking 180% of available RAM, maybe.
For most business users, running the browser is pretty much the only job of the laptop. And using virtual memory for tabs that aren't currently in the foreground is actually not that bad. There's no need to fit all your gazillion tabs into memory; only the ones you are looking at. Browsers are pretty good at that these days. The problem isn't that browsers are inefficient but that we simply push them to the breaking point with content. Content creators expand their resource usage whenever browsers get optimized. The point of optimization is not saving cost on hardware but getting more out of the hardware.
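For what it's worth, browsers even expose this eviction behavior to pages. Here's a minimal sketch using the Page Lifecycle API (shipped in Chromium-based browsers); the event names and `wasDiscarded` are real, but the handler bodies and the "draft" key are illustrative assumptions:

```typescript
// Fired when the browser freezes a background tab to reclaim CPU/memory.
document.addEventListener("freeze", () => {
  // Persist anything you can't afford to lose; the tab may be
  // discarded entirely after this point. (Hypothetical "draft" key.)
  sessionStorage.setItem("draft", JSON.stringify({ savedAt: Date.now() }));
});

// Fired when the user returns to a frozen (but not discarded) tab.
document.addEventListener("resume", () => {
  // Re-open connections, restart timers, etc.
});

// If the tab was fully discarded and reloaded, wasDiscarded is true.
// (Not yet in the standard TypeScript DOM typings, hence the cast.)
if ((document as any).wasDiscarded) {
  const draft = sessionStorage.getItem("draft");
  console.log("restoring state from before discard:", draft);
}
```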
The optimization topic triggers the OCD of a lot of people, and sometimes those people do nice things. John Carmack built his career when Moore's law was still in full effect. Everything he did to get the most out of CPUs was super relevant and cool, but it also became dated in a matter of a few years. One moment we were running Doom on simple 386 computers and the next we were running Quake and Unreal with shiny new Voodoo GPUs on a Pentium II. I actually had the Riva 128 as my first GPU, one of the first products Nvidia shipped, and ran Unreal and other cool stuff on it. And while CPUs have increased enormously in performance, GPUs have increased even more by some ridiculous factor. Nvidia has come a long way since then.
I'm not saying optimization is unimportant; I'm just saying that compute is a cheap commodity. I actually spend quite a bit of time optimizing stuff, so I can appreciate what that feels like and how nice it is when you make something faster. Sometimes that can really make a big difference. But sometimes my time is better spent elsewhere.
That may be fine if you can actually improve the user experience by throwing hardware at the problem. But in many (most?) situations, you can't.
Most user-facing software is still single-threaded (and will likely remain so for a long time). The difference in single-threaded performance between CPUs in wide use is maybe 5x (and less than 2x for desktops), while the difference between well-optimized and poorly optimized software can easily be orders of magnitude (milliseconds vs. seconds).
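To make that concrete, here's a toy benchmark (illustrative, not from any real codebase) where the only difference is the algorithm. On typical hardware the quadratic version takes seconds while the Set-based one takes milliseconds, a gap no 5x CPU upgrade can close:

```typescript
// Deduplicate ~50k mostly-unique strings two ways.
function dedupeQuadratic(items: string[]): string[] {
  const out: string[] = [];
  for (const item of items) {
    if (!out.includes(item)) out.push(item); // O(n) scan per element -> O(n^2)
  }
  return out;
}

function dedupeLinear(items: string[]): string[] {
  return [...new Set(items)]; // single O(n) pass over a hash set
}

const items = Array.from({ length: 50_000 }, (_, i) => `item-${i % 40_000}`);

for (const [name, fn] of [
  ["quadratic", dedupeQuadratic],
  ["linear", dedupeLinear],
] as const) {
  const start = performance.now();
  fn(items);
  console.log(`${name}: ${(performance.now() - start).toFixed(1)} ms`);
}
```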
And if you are bottlenecked by network latency, then the CPU might not even matter.