There’s about a factor of 3 improvement that can be made to most code after the profiler has given up. That probably means there are better profilers that could be written, but in 20 years of having them I’ve only seen 2 that tried. Sadly I think flame graphs made profiling more accessible to the unmotivated but didn’t actually improve overall results.
The sympathy is also needed: problems aren't found when people don't care, or consider the current performance acceptable.
> There’s about a factor of 3 improvement that can be made to most code after the profiler has given up. That probably means there are better profilers than could be written, but in 20 years of having them I’ve only seen 2 that tried.
It's hard for profilers to identify slowdowns that are due to the architecture. Making the function do less work to get its result feels different from determining that the function's result is unnecessary.
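A minimal sketch of that distinction (the function names and the 100k-element list are made up for illustration): a profiler will flag the sort as the hot spot and can only suggest making the sort faster, but the architectural observation is that a full sorted order was never needed in the first place.

```python
import heapq

def top_ten_by_sorting(data):
    # A profiler sees the O(n log n) sort as the hot spot and can
    # only point at making the sort itself faster.
    return sorted(data)[:10]

def top_ten(data):
    # The architectural fix: the full sorted order is unnecessary.
    # heapq.nsmallest does roughly O(n log k) work for the same answer.
    return heapq.nsmallest(10, data)

data = list(range(100_000, 0, -1))
assert top_ten_by_sorting(data) == top_ten(data)
```

No flame graph distinguishes these two: both show all the time inside a tight, well-optimized routine.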
My first job out of college, I got handed the slowest machine they had. The app was already half done and was dogshit slow even with small data sets. I was embarrassed to think my name would be associated with it. The UI painted so slowly I could watch the individual lines paint on my screen.
In college, a friend and I had made homework into a game of seeing who could make their assignment run faster or use less memory. Such as calculating the Fibonacci of 100, or 1000. So I just started applying those skills and learning new ones.
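A sketch of the kind of win that game turns on (my example, not the original homework): the textbook recursive Fibonacci is exponential and will never reach n = 100, while a simple iterative version computes n = 1000 instantly.

```python
def fib(n):
    # O(n) iteration with two variables of state, versus the
    # exponential naive recursion most homework solutions start with.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

fib(1000)  # returns immediately; the naive recursive version never would
```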
For weeks I evaluated improvements to the code by counting “one Mississippi, two Mississippi”, then by how many syllables I got through, then with the stopwatch function on my watch. No profilers, no benchmarking tools, just code review.
And that’s how my first specialization became optimization.