
Scala 3 slowed us down?

(kmaliszewski9.github.io)
261 points kmaliszewski | 6 comments
spockz No.46182774
For me the main takeaway of this is that you want automated performance tests in place, combined with flamegraph visibility by default. Especially for this kind of major language-upgrade change.
replies(2): >>46182923 >>46185326
1. esafak No.46182923
What are folks using for perf testing on JVM these days?
replies(5): >>46183086 >>46183506 >>46184574 >>46185332 >>46188235
2. noelwelsh No.46183086
JMH is what I've always used for small benchmarks.
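JMH exists because naive JVM timing loops mislead: its harness handles warmup (JIT compilation of the hot path) and result consumption (to prevent dead-code elimination) for you. A minimal hand-rolled sketch of the concerns it automates, with all names hypothetical, looks like:

```java
// MiniBench.java: a sketch of what a benchmark harness like JMH automates.
public class MiniBench {
    static long work() {
        long sum = 0;
        for (int i = 0; i < 100_000; i++) sum += (long) i * i;
        return sum;
    }

    public static void main(String[] args) {
        // Warmup iterations let the JIT compile the hot path first,
        // one of the pitfalls JMH handles for you automatically.
        for (int i = 0; i < 5; i++) work();

        long best = Long.MAX_VALUE;
        for (int i = 0; i < 10; i++) {
            long t0 = System.nanoTime();
            long r = work();
            long dt = System.nanoTime() - t0;
            if (dt < best) best = dt;
            // Consume the result so the loop isn't dead-code eliminated
            // (JMH uses a Blackhole for this).
            if (r == 42) System.out.println("unreachable");
        }
        System.out.println("best ns: " + best);
    }
}
```

Even this sketch misses things JMH gets right (forked JVMs, statistical aggregation across iterations), which is why hand-rolled timing loops are generally discouraged.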
3. cogman10 No.46183506
For production systems I use flight recordings (JFR). To analyze them I use Java Mission Control.
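Flight recordings are usually started with JVM flags or jcmd, but they can also be driven programmatically from inside the process with the JDK's own jdk.jfr API; a small sketch (file name and workload are placeholders):

```java
import jdk.jfr.Configuration;
import jdk.jfr.Recording;
import java.nio.file.Path;

public class JfrSketch {
    public static void main(String[] args) throws Exception {
        // Use the JDK's bundled "default" event settings (low overhead).
        Configuration config = Configuration.getConfiguration("default");
        try (Recording recording = new Recording(config)) {
            recording.start();

            // ... the workload to profile would run here ...
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;

            recording.stop();
            // Dump a .jfr file that Java Mission Control can open.
            recording.dump(Path.of("recording.jfr"));
            System.out.println("wrote recording.jfr, sum=" + sum);
        }
    }
}
```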

For OOME problems I use a heap dump and the Eclipse Memory Analyzer tool (MAT).
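A heap dump for MAT is typically captured with jcmd or the -XX:+HeapDumpOnOutOfMemoryError flag, but it can also be triggered from inside the JVM via the HotSpotDiagnosticMXBean; a sketch (output file name is arbitrary, though the JDK requires the .hprof suffix):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumpSketch {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean mxBean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Second argument true = dump only live (reachable) objects,
        // which keeps the file smaller and is usually what MAT needs.
        mxBean.dumpHeap("heap.hprof", true);
        System.out.println("wrote heap.hprof");
    }
}
```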

For microbenchmarks I use JMH, but I tend to avoid those where I can.

4. gavinray No.46184574
async-profiler
5. spockz No.46185332
I use JMH for microbenchmarks on any code we know is performance-sensitive, and to highlight performance differences between implementations. (We usually keep the benchmarks around as an archive of what we tried, but don't run them in CI.)

Then we benchmark the whole Java app in its container, running async-profiler and shipping the profiles into Pyroscope. We created a test harness for this that spins up and mocks any dependencies based on API subscription data and contracts, and simulates the workload.

This whole mechanism is generalised: for the test harness to function, teams that create individual apps only need to work with contract-driven testing. During and after a benchmark we also verify that other non-functionals still work as required, e.g. that tracing is still linked to the right requests. This works for almost any language that we use.
