Reputation: 1052
I am working with .NET Framework 4.5 using C#.
I wish to measure code performance, but with the additional complication that I would like to compare results between machines (different hardware).
One of the core goals is to come up with benchmarks that compare algorithms (algo X vs. algo Y using dataset Z). That is fine if I consistently use the exact same hardware, but I would also like the option to distribute these performance tests across many machines - which are mostly different.
How can I efficiently measure the performance of a particular machine?
I am currently using the System.Diagnostics.Stopwatch class combined with a Fibonacci calculation, trying to measure how many iterations the machine can complete in X ticks/milliseconds. However, as you may already know, this technique is not very precise.
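Roughly what that looks like (a simplified sketch rather than my exact code; Fib and the iteration counts are just illustrative):

    using System;
    using System.Diagnostics;

    class MachineScore
    {
        // Naive recursive Fibonacci, used purely as a CPU-bound workload.
        static long Fib(int n)
        {
            return n < 2 ? n : Fib(n - 1) + Fib(n - 2);
        }

        // Count how many Fib(25) calls complete within the given number of milliseconds.
        static long CalibrationScore(long windowMs)
        {
            long count = 0;
            var sw = Stopwatch.StartNew();
            while (sw.ElapsedMilliseconds < windowMs)
            {
                Fib(25);
                count++;
            }
            return count;
        }

        static void Main()
        {
            Console.WriteLine("Machine score: {0} Fib(25) calls in 2000 ms",
                              CalibrationScore(2000));
        }
    }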
Is the answer in Stopwatch.Frequency?
Anyone have any better suggestions?
ADDED INFORMATION -----
Example: comparing algorithms that are not multi-threaded (so the number of cores won't matter), such as the difference between running a sequential scan and using a red/black tree.
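For instance, something along these lines (a sketch only; SortedSet<T> stands in for the red/black tree here, and the sizes are arbitrary):

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Linq;

    class AlgoComparison
    {
        static void Main()
        {
            var rng = new Random(12345); // fixed seed so "dataset Z" is reproducible
            int[] data = Enumerable.Range(0, 100000).Select(i => rng.Next()).ToArray();
            int[] targets = Enumerable.Range(0, 10000).Select(i => rng.Next()).ToArray();
            var tree = new SortedSet<int>(data); // SortedSet<T> is backed by a red/black tree

            var sw = Stopwatch.StartNew();
            int hits = 0;
            foreach (int t in targets) // algo X: sequential scan
                if (Array.IndexOf(data, t) >= 0) hits++;
            sw.Stop();
            Console.WriteLine("Sequential scan: {0} ms ({1} hits)", sw.ElapsedMilliseconds, hits);

            sw.Restart();
            hits = 0;
            foreach (int t in targets) // algo Y: red/black tree lookup
                if (tree.Contains(t)) hits++;
            sw.Stop();
            Console.WriteLine("Tree lookup:     {0} ms ({1} hits)", sw.ElapsedMilliseconds, hits);
        }
    }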
Upvotes: 1
Views: 481
Reputation: 20330
It's not just the machine, either. It could also be how busy that machine is and with what.
It depends on whether you are looking for a minimum hardware recommendation or you are tuning for a particular environment.
Assuming that the difference between a number of runs of two versions of an algorithm on one machine is down to the algorithms alone is iffy, except in the general sense of "slower" or "faster".
Comparing different machines beyond gross differences like single-core vs. multi-core is an exercise in futility.
Upvotes: 0
Reputation: 56576
Unfortunately, it's not really possible to come up with an accurate metric for comparing computer A's time on algo X to computer B's time on algo Y. For example, if algo X is more memory-intensive than Y, and computer A has slow memory and a fast CPU compared to B, algo X will look awful on A even though it might run faster than Y on computer B.
What you can do is run each algorithm on each machine, and see which algorithm has the best average run time, memory usage, etc.
You could also try to come up with a metric to compare the machines, as you seem to be trying to do. If you make that calibration benchmark last several seconds, and all of the algorithms rely on roughly the same mix of CPU, disk, memory, and the various CPU cache levels, this should be fairly accurate.
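As a rough sketch of that idea (the helper names and the normalization step below are just one possible approach, not a standard API):

    using System;
    using System.Diagnostics;
    using System.Linq;

    static class Benchmark
    {
        // Average wall-clock time of an action over several runs, after one warm-up
        // pass to take JIT compilation and cold caches out of the picture.
        public static double AverageMs(Action action, int runs = 5)
        {
            action(); // warm-up
            var times = new double[runs];
            for (int i = 0; i < runs; i++)
            {
                var sw = Stopwatch.StartNew();
                action();
                sw.Stop();
                times[i] = sw.Elapsed.TotalMilliseconds;
            }
            return times.Average();
        }

        // Divide the algorithm's time by the time of a fixed calibration workload run
        // on the same machine, so results from different machines become roughly
        // comparable - but only to the extent the calibration workload is representative.
        public static double NormalizedScore(Action algorithm, Action calibration)
        {
            return AverageMs(algorithm) / AverageMs(calibration);
        }
    }

The caveat above still applies: this only helps if the calibration workload stresses CPU, memory, disk, and cache in roughly the same proportions as the algorithms you are testing.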
Upvotes: 0
Reputation: 52689
Simply: you can't. Or at least, not reliably.
For example, I once had a discussion with someone about static methods and threading in .NET, and I ran a test on a couple of machines. Running the same executable, the single-core machine outperformed the dual-core machine. Obviously the problem there was the .NET runtime using different internal algorithms to determine safety (i.e. it put more locks in on the dual-core machine than were needed on the single-core one, or the runtime differed between the single-core workstation and the dual-core server). The point being that you can't even run the same executable on two machines and get a comparable baseline.
So if you run algorithm X on machines A and B and record the difference (let's say machine B was twice as fast), you cannot then run algorithm Y on machine A and assume it would run twice as fast if executed on machine B.
The complexity of the factors you'd have to take into account would be too great. All you can do is compare the same thing on different machines, if you're testing machine performance, or different things on the same machine, etc.
Upvotes: 1