Reputation: 21764
I have written a few commonly-used methods where I've found that performance is quite important. After making several changes to fix noticeable performance bugs, I like to put some tests in place to verify that the performance does not degrade due to future changes. However, I'm finding that these tests are often very flaky (likely due to garbage collection, other activity on our automated testing server, etc.). What I'm wondering is, is there an accepted best practice for writing and maintaining these sorts of tests?
So far, most of my tests look like the following:
runMyCode(); // warm-up pass for assembly loading/jitting
var sw = Stopwatch.StartNew();
for (var i = 0; i < iterations; i++) { runMyCode(); }
var time = sw.Elapsed;
// then either
Assert.Less(time, TimeSpan.FromSeconds(someReasonablyGenerousNumberOfSeconds));
// or
// time some other piece of benchmark code (e.g. a framework method
// which runMyCode() layers on top of) and do:
Assert.Less(time.TotalSeconds, someMultiplier * benchmarkTime.TotalSeconds);
One thought I had to improve stability would be to have the test store recorded times in a database and only fail if the last N times failed the benchmark.
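Roughly, that check could look like the sketch below (ITimingStore is a hypothetical persistence interface I would still have to implement, perhaps backed by a database table keyed by test name):

// Sketch only: ITimingStore and the run-window size are assumptions, not existing types.
using System;
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

public interface ITimingStore
{
    void Append(string testName, TimeSpan elapsed);            // persist the newest measurement
    IReadOnlyList<TimeSpan> GetLastN(string testName, int n);  // most recent n measurements
}

public static class PerformanceAssert
{
    public static void NotConsistentlySlow(
        ITimingStore store, string testName, TimeSpan latest, TimeSpan threshold, int n = 5)
    {
        store.Append(testName, latest);
        var recent = store.GetLastN(testName, n);

        // Only fail once a full window of the last n runs has exceeded the threshold,
        // so a single noisy run (GC pause, busy build server) does not break the build.
        if (recent.Count == n && recent.All(t => t > threshold))
        {
            Assert.Fail($"The last {n} runs of {testName} all exceeded {threshold}.");
        }
    }
}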
Upvotes: 1
Views: 235
Reputation: 19011
Look at this article: Learn how to create correct C# benchmarks. The main tips for your case (a rough sketch of them follows the list):
You should warm up the processor cache to get correct results
You should run the code on a single processor (Process.GetCurrentProcess().ProcessorAffinity = new IntPtr(1))
You should set a high priority for the thread and process
You should run GC.Collect() before the benchmark
You should run the benchmark several times and take the median of the results
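Put together, those tips could look roughly like this helper (a sketch only; the run count and priority values are assumptions you would tune for your environment):

// Sketch only: run count and priority settings are assumptions, adjust as needed.
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading;

static class BenchmarkHelper
{
    public static double MedianMilliseconds(Action action, int runs = 11)
    {
        // Pin to a single core and raise priorities so the OS scheduler interferes less.
        Process.GetCurrentProcess().ProcessorAffinity = new IntPtr(1);
        Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
        Thread.CurrentThread.Priority = ThreadPriority.Highest;

        action(); // warm-up pass: JIT compilation and cache population

        var timings = new double[runs];
        for (var i = 0; i < runs; i++)
        {
            GC.Collect();                  // start each run from a clean heap
            GC.WaitForPendingFinalizers();

            var sw = Stopwatch.StartNew();
            action();
            sw.Stop();
            timings[i] = sw.Elapsed.TotalMilliseconds;
        }

        // The median is less sensitive to an occasional GC pause or scheduler hiccup than the mean.
        return timings.OrderBy(t => t).ElementAt(runs / 2);
    }
}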
You can also use this free, open-source .NET framework to create proper benchmarks: BenchmarkDotNet.
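For reference, a minimal BenchmarkDotNet benchmark looks something like this (the class and method names are placeholders); the framework then handles warm-up, iteration counts and statistics for you:

// Minimal BenchmarkDotNet usage; MyBenchmarks and RunMyCode are placeholder names for your own code.
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class MyBenchmarks
{
    [Benchmark]
    public void RunMyCode()
    {
        // call the method under test here
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<MyBenchmarks>();
}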
Upvotes: 4
Reputation: 311
I don't know if you are using Visual Studio or MonoDevelop, but you could use a speed profiler tool, or use the default Visual Studio performance profiler. See this tutorial: http://msdn.microsoft.com/en-us/library/ms182372.aspx (there is much more on the internet).
Upvotes: 1