We have continuous benchmarking for one of our tools (written in C++), and to get the "same" results every time, we launch it on the same machine. This is far from ideal, but otherwise you get noisy neighbours, a pesky host (if it's a VM), etc.
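One thing that helps within a single machine is pinning the benchmark to a fixed core so scheduler migrations don't add noise on top of everything else. A minimal Linux/glibc sketch (core 2 is an arbitrary choice):

    // Pin the calling process to one core so the scheduler doesn't
    // bounce it around between runs. Linux-specific (sched_setaffinity).
    #include <sched.h>

    bool pin_to_core(int core) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        return sched_setaffinity(0, sizeof(set), &set) == 0;  // 0 = this process
    }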
One idea we had was to run the same test on the same machine several times, comparing older and newer code (ideally toggled through switches). This could work for some codepaths, but not for truly continuous check-ins. A sketch of that idea is below.
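For the same-machine A/B approach, interleaving the runs of the old and new builds (rather than running one batch and then the other) helps, because both builds then see the same background noise and the relative difference is more stable. A rough sketch, assuming two prebuilt binaries; ./tool_old and ./tool_new are hypothetical placeholders:

    // Interleaved A/B benchmark: alternate runs of two builds on the
    // same machine and compare medians, so machine-level noise hits
    // both builds roughly equally.
    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <cstdlib>
    #include <string>
    #include <vector>

    static double time_one_run(const std::string& cmd) {
        auto start = std::chrono::steady_clock::now();
        if (std::system(cmd.c_str()) != 0) {
            std::fprintf(stderr, "run failed: %s\n", cmd.c_str());
            std::exit(1);
        }
        auto end = std::chrono::steady_clock::now();
        return std::chrono::duration<double>(end - start).count();
    }

    static double median(std::vector<double> v) {
        std::sort(v.begin(), v.end());
        return v[v.size() / 2];
    }

    int main() {
        const std::string old_cmd = "./tool_old --bench";  // hypothetical paths
        const std::string new_cmd = "./tool_new --bench";
        std::vector<double> old_times, new_times;
        for (int i = 0; i < 10; ++i) {  // interleave old/new runs
            old_times.push_back(time_one_run(old_cmd));
            new_times.push_back(time_one_run(new_cmd));
        }
        double old_med = median(old_times), new_med = median(new_times);
        std::printf("old median: %.3fs, new median: %.3fs, delta: %+.1f%%\n",
                    old_med, new_med, 100.0 * (new_med - old_med) / old_med);
        return 0;
    }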
Just wondering what folks do. I can guess at some of it, but there's always something hidden and not well known.
In addition, you can look at total CPU seconds used, memory allocations at the kernel level, and, specifically for the JVM, at the GC metrics and allocation rate. If these numbers change significantly, you know you need to take a look.
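On POSIX systems, total CPU seconds and peak memory can be read straight from the kernel with getrusage(), which is much less noisy than wall time. A small sketch (note that Linux reports ru_maxrss in kilobytes):

    // Read CPU time and peak RSS for the current process from the
    // kernel after the workload has run. POSIX; Linux ru_maxrss is KiB.
    #include <sys/resource.h>
    #include <cstdio>

    void report_usage() {
        struct rusage ru;
        if (getrusage(RUSAGE_SELF, &ru) != 0) return;
        double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
        double sys  = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
        std::printf("cpu: %.3fs user, %.3fs sys, peak rss: %ld KiB\n",
                    user, sys, ru.ru_maxrss);
    }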
We do run this benchmark comparison in most nightly builds and find regressions this way.
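The comparison itself can be as simple as checking the new number against a stored baseline with a noise tolerance; a toy sketch (the 5% threshold is an assumption, tuned per benchmark to its observed run-to-run variance):

    // Flag a regression when the current measurement exceeds the
    // stored baseline by more than the noise tolerance.
    #include <cstdio>

    bool is_regression(double baseline_s, double current_s,
                       double tolerance = 0.05) {
        return current_s > baseline_s * (1.0 + tolerance);
    }

    int main() {
        double baseline = 1.20, current = 1.31;  // example numbers
        if (is_regression(baseline, current))
            std::printf("regression: %.2fs -> %.2fs\n", baseline, current);
    }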