
hinkley:
Any time a library in your code goes from being used by a couple of people to being used by everyone, you have to audit it periodically from then on.

A set of libraries in our code had grown to 20% of response time through years of accretion. A couple of months of work cut that in half, with no architectural or cache changes. It was just about the largest, and definitely the most cost-effective, initiative we completed on that team.

Looking at flame charts is only step one. You also need to look at invocation counts, for things that seem to be getting called far more often than they should be. Profiling tools frequently (dare I say consistently) misattribute the cost of functions because of pressure on the CPU subsystems. And most of the times I've found an optimization that turned out substantially larger than expected, it came from cumulative call count, not run time.
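To make that concrete, here's a minimal sketch using Python's built-in cProfile/pstats (the handle_request and expensive_lookup functions are hypothetical stand-ins, not from the original comment): the point is simply to rank by call count rather than by cumulative time, so you can spot things invoked far more often than they should be.

```python
import cProfile
import io
import pstats


def expensive_lookup(i):
    # Hypothetical helper standing in for a library call on the hot path.
    return i * i


def handle_request():
    # Hypothetical request handler that calls the helper many times.
    return sum(expensive_lookup(i) for i in range(10_000))


profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

out = io.StringIO()
stats = pstats.Stats(profiler, stream=out)
# Sorting by 'ncalls' ranks functions by how often they were invoked;
# sorting by 'cumulative' ranks by time, which is what flame charts show
# and what can be skewed when the CPU subsystems are under pressure.
stats.sort_stats("ncalls").print_stats(10)
print(out.getvalue())
```

The same idea carries over to whatever profiler you actually use: most of them can sort or filter by call count, it's just not the default view.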

smittywerben:
Dare me to say "costless leaky abstraction." Then I'll point to the thread next door, where people are using Chrome's profilers to diagnose Chrome internals via Scratch. Then I'll finish by saying that at least Unreal has that authentic '90s feel to it.