238 points by hundredwatt | 3 comments
mosselman No.42181990
Hyperfine is great. I use it sometimes for some quick web page benchmarks:

https://abuisman.com/posts/developer-tools/quick-page-benchm...

As mentioned elsewhere in the thread, it is not the best approach when you want to chase single-millisecond optimisations, since there is a lot of overhead (especially the way I demonstrate here), but it works very well for sanity checks.

Sesse__ No.42183449
> Hyperfine is great.

Is it, though?

What I would expect a system like this to have, at a minimum:

  * Robust statistics with p-values (not just min/max, compensation for multiple hypotheses, no Gaussian assumptions)
  * Multiple stopping points depending on said statistics.
  * Automatic isolation to the greatest extent possible (given appropriate permissions)
  * Interleaved execution, in case something external changes mid-way.
I don't see any of this in hyperfine. It just… runs things N times and then does a naïve average/min/max? At that rate, one could just as well use a shell script and eyeball the results.
1. sharkdp No.42185320
> Robust statistics with p-values (not just min/max, compensation for multiple hypotheses, no Gaussian assumptions)

This is not included in the core of hyperfine, but we do have scripts to compute "advanced" statistics, and to perform t-tests here: https://github.com/sharkdp/hyperfine/tree/master/scripts
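Roughly, a Welch's t-test over two hyperfine JSON exports looks like this (a simplified sketch, not the script itself; the scripts in the repository are the canonical versions, and the `results[].times` layout follows the documented JSON export format):

  import json
  import sys

  from scipy.stats import ttest_ind

  def load_times(path):
      """Per-run wall-clock times from a `hyperfine --export-json` file."""
      with open(path) as f:
          return json.load(f)["results"][0]["times"]

  a, b = load_times(sys.argv[1]), load_times(sys.argv[2])

  # Welch's variant drops the equal-variance assumption, but still assumes
  # approximate normality of the mean (see the reply below).
  stat, p = ttest_ind(a, b, equal_var=False)
  print(f"t = {stat:.3f}, p = {p:.4f}")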

Please feel free to comment here if you think it should be included in hyperfine itself: https://github.com/sharkdp/hyperfine/issues/523

> Automatic isolation to the greatest extent possible (given appropriate permissions)

This sounds interesting. Please feel free to open a ticket if you have any ideas.

> Interleaved execution, in case something external changes mid-way.

Please see the discussion here: https://github.com/sharkdp/hyperfine/issues/21
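The idea under discussion, roughly: alternate or shuffle the runs of the different commands instead of running them back to back, so that slow drift in machine state hits all of them equally. An untested sketch (with made-up ./a and ./b placeholder commands):

  import random
  import subprocess
  import time

  def time_cmd(cmd):
      start = time.perf_counter()
      subprocess.run(cmd, shell=True, check=True)
      return time.perf_counter() - start

  samples = {"./a": [], "./b": []}
  schedule = ["./a", "./b"] * 10
  random.shuffle(schedule)  # randomized order rather than strict alternation
  for cmd in schedule:
      samples[cmd].append(time_cmd(cmd))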

> It just… runs things N times and then does a naïve average/min/max?

While there is nothing wrong with computing average/min/max, this is not all hyperfine does. We also compute modified Z-scores to detect outliers, and use those to issue a warning if we think the mean value is influenced by them. We also warn if the first run of a command took significantly longer than the rest of the runs, and suggest countermeasures.
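For those unfamiliar with modified Z-scores: they replace mean/stddev with median/MAD, which makes the outlier test itself robust to the outliers it is hunting. A toy illustration using the standard Iglewicz-Hoaglin construction (hyperfine's actual implementation is in Rust, and its threshold may differ):

  import statistics

  def modified_z_scores(times):
      med = statistics.median(times)
      # Median absolute deviation: a robust spread estimate, unlike stddev.
      mad = statistics.median(abs(t - med) for t in times)
      # 0.6745 rescales MAD to be comparable to a standard deviation
      # under normality.
      return [0.6745 * (t - med) / mad for t in times]

  runs = [0.101, 0.102, 0.099, 0.100, 0.103, 0.250]  # one slow outlier
  print([t for t, z in zip(runs, modified_z_scores(runs)) if abs(z) > 3.5])
  # -> [0.25]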

Depending on the benchmark I do, I tend to look at either the `min` or the `mean`. If I need something more fine-grained, I export the results and use the scripts referenced above.

> At that rate, one could just as well use a shell script and eyeball the results.

Statistical analysis (which you can consider to be basic) is just one reason why I wrote hyperfine. The other reason is that I wanted to make benchmarking easy to use. I use warmup runs, preparation commands and parametrized benchmarks all the time. I also frequently use the Markdown export or the JSON export to generate graphs or histograms. This is my personal experience. If you are not interested in all of these features, you can obviously "just as well use a shell script".
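As an example of the last point, post-processing a JSON export takes only a few lines (field names follow the documented export format; the plotting choices are my own, not a script shipped with hyperfine):

  import json

  import matplotlib.pyplot as plt

  # Produced by e.g.: hyperfine --warmup 3 --export-json results.json 'cmd'
  with open("results.json") as f:
      results = json.load(f)["results"]

  for r in results:
      plt.hist(r["times"], bins=20, alpha=0.6, label=r["command"])

  plt.xlabel("wall-clock time [s]")
  plt.ylabel("runs")
  plt.legend()
  plt.show()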

2. Sesse__ No.42185809
> This is not included in the core of hyperfine, but we do have scripts to compute "advanced" statistics, and to perform t-tests here: https://github.com/sharkdp/hyperfine/tree/master/scripts

t-tests run afoul of the “no Gaussian assumptions” requirement, though. Distributions arising from benchmarking frequently show various forms of skew, which messes up t-tests and yields artificially narrow confidence intervals.
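To see why this matters in practice, compare a normal-theory interval with a bootstrap interval on skewed, benchmark-like data (synthetic numbers, purely illustrative):

  import numpy as np

  rng = np.random.default_rng(0)
  times = rng.lognormal(mean=-2.3, sigma=0.8, size=30)  # right-skewed "runs"

  m, s = times.mean(), times.std(ddof=1)
  half = 1.96 * s / np.sqrt(len(times))
  print("normal-theory 95% CI:", (m - half, m + half))

  # Bootstrap percentile interval: resample runs, no Gaussian assumption.
  boot = [rng.choice(times, size=len(times), replace=True).mean()
          for _ in range(10_000)]
  print("bootstrap 95% CI:", (np.percentile(boot, 2.5),
                              np.percentile(boot, 97.5)))
  # The bootstrap interval comes out asymmetric around the mean; a
  # symmetric normal-theory interval cannot capture that skew.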

(I'll gladly give you credit for your outlier detection, though!)

>> Automatic isolation to the greatest extent possible (given appropriate permissions)

> This sounds interesting. Please feel free to open a ticket if you have any ideas.

Off the top of my head, some options that would:

  * Bind to isolated CPUs, if the system was booted with isolcpus=
  * Bind to a consistent set of cores/hyperthreads (the scheduler frequently sabotages benchmarking, especially if your cores have very different maximum frequencies)
  * Warn if thermal throttling is detected during the run
  * Warn if an inappropriate CPU governor is enabled
  * Lock the program into RAM (probably hard to do without some sort of help from the program)
  * Enable realtime priority if available (e.g., if isolcpus= is not enabled, or you're not on Linux)

Of course, sometimes you would _want_ to benchmark some of these effects, and that's fine. But most people probably won't, and won't know that they exist. I may easily have forgotten some.
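A rough, Linux-only sketch of what a couple of these checks could look like (standard sysfs and scheduler interfaces, nothing hyperfine-specific, and nowhere near complete isolation):

  import os

  # Pin to a fixed set of cores so the scheduler cannot migrate the benchmark.
  os.sched_setaffinity(0, {2, 3})

  # Warn about an inappropriate CPU frequency governor.
  with open("/sys/devices/system/cpu/cpu2/cpufreq/scaling_governor") as f:
      governor = f.read().strip()
  if governor != "performance":
      print(f"warning: governor is '{governor}', expected 'performance'")

  # Ask for realtime (FIFO) priority; needs CAP_SYS_NICE or root.
  try:
      os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))
  except PermissionError:
      print("warning: could not enable realtime priority")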

On the flip side (making things more random as opposed to less), something that randomizes the initial stack pointer would be nice, as I've sometimes seen this go really, really wrong (renaming a binary from foo to foo_new made it run >1% slower!).

3. sharkdp No.42186318
> On the flip side (making things more random as opposed to less), something that randomizes the initial stack pointer would be nice, as I've sometimes seen this go really, really wrong (renaming a binary from foo to foo_new made it run >1% slower!).

This is something we do already. We set a `HYPERFINE_RANDOMIZED_ENVIRONMENT_OFFSET` environment variable with a random-length value: https://github.com/sharkdp/hyperfine/blob/87d77c861f1b6c761a...
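For the curious: the variable name above is from the linked source, but the mechanism, reconstructed here in Python rather than hyperfine's actual Rust, amounts to padding the child's environment with a random-length value, which shifts the initial stack layout between runs:

  import os
  import random
  import subprocess
  import time

  def run_once(cmd):
      env = dict(os.environ)
      # A random-length value shifts the environment block, and with it
      # the initial stack layout of the child process, between runs.
      env["HYPERFINE_RANDOMIZED_ENVIRONMENT_OFFSET"] = "X" * random.randrange(4096)
      start = time.perf_counter()
      subprocess.run(cmd, env=env, check=True)
      return time.perf_counter() - start

  samples = [run_once(["./foo"]) for _ in range(10)]  # "./foo" is a placeholder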