
130 points luu | 2 comments | | HN request time: 0s | source
kqr ◴[] No.40715797[source]
Another of those somewhat counter-intuitive results is the answer to the question "how much do we need to scale up to avoid response time regression when the request rate is doubled?"

It is very easy to blurt out "well obviously we need twice the processing power!" but if we scale to twice the processing power, then start accepting twice the request rate – we will actually be serving each request in half the time we originally did.

To many people that sounds weird; it sounds like we got something for nothing. If I invite twice as many people to a party and buy twice as many cookies, it's not like each guest will get twice as many cookies – that just leaves the originally planned number of cookies for each guest.

But for response time it comes back to the first equation in TFA:

    T = 1/μ · 1/(1 - ρ)
Doubling both arrival rate and maximum service rate leaves ρ – and the second factor with it – unchanged, but still halves the 1/μ factor, resulting in half the response time.
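A quick sanity check of that claim in Python (the rates are illustrative, not from the article):

```python
# M/M/1 mean response time: T = 1/mu * 1/(1 - rho), where rho = lam/mu.
def response_time(lam, mu):
    rho = lam / mu
    assert rho < 1, "system must be stable (rho < 1)"
    return (1 / mu) * (1 / (1 - rho))

t1 = response_time(6, 10)    # original system: lam = 6 req/s, mu = 10 req/s
t2 = response_time(12, 20)   # double both the arrival and the service rate
print(t1, t2)  # rho (and the 1/(1-rho) factor) is unchanged; t2 = t1 / 2
```

Doubling both rates leaves ρ = 0.6 in both cases, but the 1/μ factor halves, so the response time drops from 0.25 s to 0.125 s.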

The appropriate amount to scale by is the k that solves the equation we get when we set the old response time T at request rate λ equal to the one at request rate 2λ and kμ processing power. This is something like

    T = 1/μ · 1/(1 - λ/μ) = 1/kμ · 1/(1 - 2λ/kμ)
but rearranges to the much simpler

    k = ρ + 1
which, a colleague of mine told me, can be interpreted intuitively as "the processing power that needs to be added is exactly that which will handle an additional unit of the current utilisation on top of the current utilisation, i.e. twice as much."
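The rearrangement is easy to verify numerically (my sketch; λ and μ are made-up values):

```python
# Check that scaling capacity by k = rho + 1 keeps the M/M/1
# response time constant when the arrival rate doubles.
def response_time(lam, mu):
    rho = lam / mu
    return (1 / mu) * (1 / (1 - rho))

lam, mu = 6.0, 10.0          # illustrative rates, rho = 0.6
rho = lam / mu
k = rho + 1                  # claimed scaling factor, here 1.6

t_old = response_time(lam, mu)
t_new = response_time(2 * lam, k * mu)
print(t_old, t_new)  # equal: doubling lam while scaling mu by k leaves T unchanged
```

With ρ = 0.6 the system only needs 1.6× the capacity, not 2×, to absorb twice the traffic at the same response time.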

This is mostly good news for people doing capacity planning in advance of events etc. If you run your systems at reasonable utilisation levels normally, you don't actually need that much additional capacity to handle peak loads.
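To put numbers on "not that much additional capacity" (a sketch of mine, normalising μ = 1 so λ = ρ):

```python
# Required scaling factor k = rho + 1 across utilisation levels,
# checking each time that the response time really is unchanged.
def response_time(lam, mu):
    rho = lam / mu
    return (1 / mu) / (1 - rho)

mu = 1.0  # normalise the service rate so that lam = rho
for rho in (0.1, 0.3, 0.5, 0.9):
    k = rho + 1
    before = response_time(rho, mu)
    after = response_time(2 * rho, k * mu)
    print(f"rho={rho}: scale by {k:.1f}x -> {before:.3f}s before, {after:.3f}s after")
```

At 30% utilisation you only need 30% more capacity to handle double the traffic; only a system already running near saturation needs close to the naive 2×.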

replies(6): >>40716157 #>>40716287 #>>40716860 #>>40717110 #>>40717333 #>>40719853 #
1. kaashif ◴[] No.40717333[source]
> but if we scale to twice the processing power, then start accepting twice the request rate – we will actually be serving each request in half the time we originally did.

People usually add processing power by adding more parallelism - more machines, VMs, pods, whatever. In this case, the "blurted out" answer is correct.

If I take one second to serve a request on a machine then I add another machine and start serving twice the requests, the first machine doesn't get faster.

Maybe what you're saying is true if you make your CPUs twice as fast, but that's not usually possible on a whim.

replies(1): >>40717744 #
2. kqr ◴[] No.40717744[source]
It can still be true when scaling horizontally, depending on utilisation levels and other system characteristics, as well as the economics of errors.

Statistically speaking, I more often find the blurted-out answer to be further from the truth than 1 + ρ.
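One way to make the horizontal-scaling case concrete (my sketch, not from either commenter): compare the scaled-up single server with an M/M/c model where a second identical server is added. The waiting probability comes from the standard Erlang C formula.

```python
from math import factorial

def erlang_c(c, a):
    """Probability an arrival must wait in an M/M/c queue (a = lam/mu)."""
    top = a**c / factorial(c)
    bottom = (1 - a / c) * sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom

def mmc_response_time(lam, mu, c):
    # Mean response time: expected wait (Erlang C / spare capacity) + service time.
    a = lam / mu
    assert a < c, "system must be stable (lam < c * mu)"
    return erlang_c(c, a) / (c * mu - lam) + 1 / mu

# Illustrative: one server, mu = 1 req/s, lam = 0.5 req/s -> T = 2.0 s
t_one = mmc_response_time(0.5, 1.0, 1)
# Double the load, and double capacity by adding a second identical server
t_two = mmc_response_time(1.0, 1.0, 2)
print(t_one, t_two)  # 2.0 s vs ~1.33 s: better than unchanged, but not halved
```

So the horizontal case lands in between: each request still takes 1 s to serve, but with two servers sharing one queue the waiting component shrinks, so response time improves even though it doesn't halve the way the single-fast-server model predicts.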