Reputation: 4552
My goal is to compare the performance of many different C# implementations of the same thing and, in the end, pick the overall fastest one.
I'm using `Stopwatch` to measure the time of each run, and I'm using a large enough input that each implementation runs long enough for the differences between runs to be significant and outside the margin of error.
The problem is that I see very big fluctuations even when the exact same code runs. Sometimes merely reordering the test cases increases the time of a specific test case by 50%, which is outside the margin of error and completely changes the conclusion about which implementation is faster. This happens even with the following code running before each test case, before the time is measured:
Thread.Sleep(500);                                  // let background work settle
GC.Collect(2, GCCollectionMode.Forced, true, true); // full, blocking, compacting collection
GC.Collect(2, GCCollectionMode.Forced, true, true); // second pass for finalizer-freed objects
Thread.Sleep(500);
I understand that I can't rule out GC entirely, but 50% fluctuations in time are still too much. What else can you suggest to eliminate as many variables as possible and get much more consistent and accurate time measurements of code runs?
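For reference, here is a simplified sketch of the kind of harness I'm using (`MyImplementation.Run` is just a stand-in for the code under test):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

static class MyImplementation
{
    // Stand-in for the code under test.
    public static long Run(int n)
    {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += i;
        return sum;
    }
}

class Harness
{
    static void Main()
    {
        // Settle the system and force full, blocking, compacting
        // collections before starting the stopwatch.
        Thread.Sleep(500);
        GC.Collect(2, GCCollectionMode.Forced, true, true);
        GC.Collect(2, GCCollectionMode.Forced, true, true);
        Thread.Sleep(500);

        var sw = Stopwatch.StartNew();
        MyImplementation.Run(100_000_000); // big enough input to run for a while
        sw.Stop();

        Console.WriteLine($"Elapsed: {sw.ElapsedMilliseconds} ms");
    }
}
```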
Upvotes: 1
Views: 1150
Reputation: 2508
I would recommend using BenchmarkDotNet for such tasks. It takes care of warm-up runs and garbage collection between runs, and it calculates the mean, standard deviation, and error for you. It's powerful and designed exactly for this.
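A minimal benchmark might look like the sketch below (the two methods are placeholders for whatever implementations you're comparing):

```csharp
using System.Text;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class ConcatBenchmarks
{
    // Placeholder implementations; swap in the real candidates.
    [Benchmark(Baseline = true)]
    public string PlusOperator()
    {
        var s = string.Empty;
        for (int i = 0; i < 1_000; i++) s += "x";
        return s;
    }

    [Benchmark]
    public string StringBuilderAppend()
    {
        var sb = new StringBuilder();
        for (int i = 0; i < 1_000; i++) sb.Append("x");
        return sb.ToString();
    }
}

public class Program
{
    // BenchmarkDotNet handles warm-up, iteration counts, and statistics,
    // then prints a summary table with mean, error, and standard deviation.
    public static void Main() => BenchmarkRunner.Run<ConcatBenchmarks>();
}
```

Make sure to run it in Release configuration without a debugger attached, otherwise BenchmarkDotNet will complain and the numbers would be meaningless anyway.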
Upvotes: 2
Reputation: 4788
The fluctuations are caused by the operating system's background work. In other words, if you open Task Manager you will see that even when no program is actively running, CPU usage constantly fluctuates between about 2~3%. There is no steady state; the system is like a boat floating on the sea.
So for your test you should not rely on your code alone; you should also collect some system data to reduce the error coefficient, and build a table like the one below:
Execution cycles | CPU usage (%) | RAM usage (GB) | Disk usage
-----------------|---------------|----------------|-----------
100              | 2             | 1              | 2
1000             | 5             | 0.75           | 1
10000            | 10            | 0.5            | 2
100000           | 11            | 1.5            | 4
1000000          | 3             | 0.6            | 3
You have to arrive at an equation of the form:

Real Execution Performance = g(CPU, RAM, Disk) * N milliseconds

where N is the nominal execution performance and `g(CPU, RAM, Disk)` is an error coefficient that depends on the CPU, RAM, and disk usage.

For example: Real Execution Performance = 0.7 * 20,000 = 14,000 milliseconds
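As a toy sketch, applying such a coefficient in code might look like this (the hard-coded 0.7 is just the figure from the example above; deriving an actual `g` is the open problem):

```csharp
using System;

class ErrorCoefficientExample
{
    // Hypothetical error coefficient. No general formula is given,
    // so the 0.7 from the worked example above is hard-coded here.
    static double G(double cpuPercent, double ramGb, double diskUsage) => 0.7;

    static void Main()
    {
        double nominalMs = 20_000;              // N: nominal execution performance
        double realMs = G(2, 1, 2) * nominalMs; // g(CPU, RAM, Disk) * N
        Console.WriteLine($"Real execution performance: {realMs} ms"); // 14000 ms
    }
}
```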
Honestly, I was not able to derive such an error coefficient myself, and I am still looking for one.
Upvotes: 1