
Scala 3 slowed us down?

(kmaliszewski9.github.io)
261 points kmaliszewski | 3 comments
spockz No.46182774
For me the main takeaway of this is that you want automated performance tests in place, combined with flamegraph insights by default, especially for these kinds of major language upgrades.
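
A minimal sketch of what such an automated benchmark could look like on the JVM, using JMH via the sbt-jmh plugin; the EncodeBench class and its workload are hypothetical stand-ins for a real hot path you would track across the upgrade:

    // Hypothetical microbenchmark tracked across a language upgrade.
    // Run with sbt-jmh; iteration/fork counts are illustrative only.
    package bench

    import java.util.concurrent.TimeUnit
    import org.openjdk.jmh.annotations._

    @State(Scope.Benchmark)
    @BenchmarkMode(Array(Mode.AverageTime))
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    @Warmup(iterations = 5)
    @Measurement(iterations = 10)
    @Fork(1)
    class EncodeBench {
      val payload: Seq[Int] = (1 to 10000).toSeq

      @Benchmark
      def encode(): Int =
        // stand-in workload; replace with the real code under test
        payload.map(_ * 2).sum
    }

If async-profiler is installed, JMH can also emit flamegraphs per run (e.g. via its -prof async profiler option), which covers the "flamegraphs by default" part without a separate tooling step.
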
replies(2): >>46182923 #>>46185326 #
malkia No.46185326
Benchmarking requires a somewhat different setup than the rest of your testing, especially if you want timings accurate down to the millisecond.

We have continuous benchmarking for one of our tools (it's written in C++), and to get the "same" results every time, we launch it on the same machine. This is far from ideal, but otherwise there would be noisy neighbours, a pesky host (if it's a VM), etc.

One idea we had was to run the same test on the same machine several times, comparing older and newer code (ideally toggled through switches). This could work for some codepaths, but not for truly continuous check-ins.

Just wondering what folks do. I can guess, but there is always something hidden or not well known.
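
A minimal sketch of that interleaving idea, assuming two hypothetical binaries ./tool-old and ./tool-new; alternating single runs rather than running each batch back to back helps cancel slow drift (thermal throttling, background load) that would otherwise bias one side:

    // Interleaved A/B timing of two hypothetical binaries on one machine.
    import scala.sys.process._

    object ABBench {
      def timeMs(cmd: String): Double = {
        val t0 = System.nanoTime()
        cmd.!                                  // run the binary, ignore its output
        (System.nanoTime() - t0) / 1e6
      }

      def median(xs: Seq[Double]): Double = xs.sorted.apply(xs.size / 2)

      def main(args: Array[String]): Unit = {
        val samples = (1 to 15).flatMap { _ =>
          // interleave: one old run, then one new run, per iteration
          Seq("old" -> timeMs("./tool-old"), "new" -> timeMs("./tool-new"))
        }
        val byVariant = samples.groupMap(_._1)(_._2)
        byVariant.toSeq.sortBy(_._1).foreach { case (name, ts) =>
          println(f"$name%-4s median: ${median(ts)}%8.1f ms over ${ts.size} runs")
        }
      }
    }

Comparing medians (or a rank test) of the interleaved samples is more robust than comparing means of two separate batches, since outliers from the noisy host hit both variants roughly equally.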

replies(2): >>46185438 #>>46197762 #
1. esafak No.46197762
Hardware performance counters (https://en.wikipedia.org/wiki/Hardware_performance_counter) can help with noisy neighbors. I am still getting into this.
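
A minimal sketch of pulling such counters by shelling out to Linux perf stat in CSV mode (-x,); ./tool is a hypothetical binary, and retired-instruction counts are typically far more stable than wall-clock time on a shared box:

    // Read hardware counters for one run of a hypothetical binary via perf.
    import scala.sys.process._

    object PerfCounters {
      def main(args: Array[String]): Unit = {
        val err = new StringBuilder
        // perf stat writes its -x, CSV report to stderr
        val logger = ProcessLogger(_ => (), line => { err.append(line); err.append('\n') })
        Seq("perf", "stat", "-x,", "-e", "instructions,cycles", "./tool").!(logger)

        // CSV layout per perf docs: value, unit, event name, ...
        val counts = err.toString.linesIterator.flatMap { line =>
          val cols = line.split(',')
          // skip rows like "<not counted>" that don't parse as numbers
          if (cols.length >= 3) cols(0).toLongOption.map(v => cols(2) -> v) else None
        }.toMap

        counts.foreach { case (event, value) => println(f"$event%-14s $value%,d") }
      }
    }
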
replies(1): >>46202031 #
2. spockz No.46202031
Yes, that can help with detecting how much CPU was actually used during the run, but it doesn't influence the benchmark results themselves. I'm not sure exactly how to use it across subsequent runs to compare final performance, and that then still needs to be extrapolated to performance in production.
replies(1): >>46202420 #
3. malkia No.46202420
Yeah, what you want to know is which change caused the slowdown (or improved performance), along with a reasonable metric behind it (for example, frame rate for a game, or something like that).