user7610

Reputation: 28751

Can I use virtualization to control for differences in host performance when benchmarking for performance regressions in my application?

Is it possible to set up a virtualized environment, be it a Docker container or a QEMU VM, to run benchmarks in a way that is not much affected by the performance of the virtualization host?

For example, my computation benchmark would always clock in at ~60 seconds (perhaps measured in CPU ticks) regardless of the actual hardware, I/O speeds would stay the same even if I upgraded the host to an SSD, and so on.

From what I've found so far, I'd say the above is not possible. So how can I get as close to that ideal as possible, so that a benchmark run inside a virtualized environment is reproducible even for people who do not have the same hardware I do?
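To illustrate the kind of metric I have in mind: instead of wall-clock seconds one could report retired CPU instructions, which are far less host-dependent (though they still ignore cache, I/O, and frequency effects). The sketch below is only an illustration and assumes a Linux host with perf installed and readable performance counters; the count_instructions helper is hypothetical, not an existing API:

```python
import subprocess
import sys

def count_instructions(cmd):
    """Run cmd under `perf stat` and return user-space instructions retired.

    Assumes Linux with perf installed and counters readable
    (sysctl kernel.perf_event_paranoid set low enough).
    """
    result = subprocess.run(
        ["perf", "stat", "-x,", "-e", "instructions:u", "--", *cmd],
        capture_output=True,
        text=True,
    )
    # With -x, perf writes CSV rows to stderr: value,unit,event-name,...
    for line in result.stderr.splitlines():
        fields = line.split(",")
        if len(fields) > 2 and fields[2].startswith("instructions") and fields[0].isdigit():
            return int(fields[0])
    raise RuntimeError("no instruction count in perf output:\n" + result.stderr)

if __name__ == "__main__":
    # Example: count instructions for a small CPU-bound Python snippet.
    n = count_instructions([sys.executable, "-c", "sum(range(10**6))"])
    print(f"{n} instructions retired")
```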

Upvotes: 2

Views: 30

Answers (1)

user7610

Reputation: 28751

One approach I heard about later is Virtual Time Execution.

The idea is to execute the code in a special environment that collects a detailed log of execution events, which can then be recalculated into the actual execution time on given hardware and OS. The reported accuracy was within 5-10%.
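As a toy illustration of the idea (not the thesis's actual machinery; the event kinds and cost numbers are invented), a workload can log abstract events during a run, and a per-event cost model for a target machine then turns the log into a predicted execution time:

```python
# Toy sketch of virtual time execution: the workload logs abstract events
# instead of being timed, and a cost model for a hypothetical target machine
# recalculates the event log into a predicted execution time. A real tool
# works at much finer granularity and calibrates its costs.

TARGET_COSTS = {          # seconds per event on the hypothetical target
    "cpu_block": 1e-6,    # one basic block of computation
    "disk_read": 5e-3,    # one 4 KiB read from a spinning disk
}

class VirtualClock:
    def __init__(self):
        self.events = []

    def record(self, kind, count=1):
        self.events.append((kind, count))

    def total(self, costs):
        # Recalculate the event log into predicted time on the target.
        return sum(costs[kind] * count for kind, count in self.events)

def workload(clock):
    # Instrumented workload: each operation logs an event rather than
    # doing (and timing) real work.
    clock.record("cpu_block", count=1_000_000)
    clock.record("disk_read", count=20)

clock = VirtualClock()
workload(clock)
print(f"predicted time on target: {clock.total(TARGET_COSTS):.3f} s")
```

Swapping in a different cost table predicts the same run's time on different hardware, which is what makes results comparable across hosts.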

I saw a thesis about it: "Software Performance Engineering using Virtual Time Program Execution".

Upvotes: 1
