judek

Reputation: 325

Standard .Net TDD Memory Test

Is it useful to write a standardised TDD [Test] method that would expose common memory issues?

The set of tests could be quickly and easily applied to a method; it would red-fail 'classic' .NET memory issues but green-pass the classic solutions.

For example, common memory issues could be: excessive object movement by the garbage collector; allocating too much; too many garbage collections (classic example: prefer StringBuilder over repeated string reallocations); holding on to memory for too long (classic example: call Dispose and do not rely on finalizers); objects inappropriately reaching gen 1, gen 2, or the Large Object Heap; little leaks that add up to something significant over time; and others.
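Several of these symptoms are measurable from plain C#. As a sketch (the iteration count is an arbitrary illustration value, but GC.CollectionCount, GC.Collect, and GC.WaitForPendingFinalizers are real BCL calls), here is the classic StringBuilder-vs-string case expressed as a gen-0 collection count:

```csharp
using System;
using System.Text;

static class GcPressureDemo
{
    // Counts gen-0 collections triggered by a piece of work,
    // starting from a settled heap.
    static int Gen0CollectionsDuring(Action work)
    {
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        int before = GC.CollectionCount(0);
        work();
        return GC.CollectionCount(0) - before;
    }

    static void Main()
    {
        // Classic issue: string concatenation in a loop reallocates
        // a new string on every pass...
        int concat = Gen0CollectionsDuring(() =>
        {
            string s = "";
            for (int i = 0; i < 10000; i++) s += "x";
        });

        // ...while StringBuilder grows its internal buffer geometrically.
        int builder = Gen0CollectionsDuring(() =>
        {
            var sb = new StringBuilder();
            for (int i = 0; i < 10000; i++) sb.Append("x");
        });

        Console.WriteLine("concat: " + concat + " gen-0 collections, StringBuilder: " + builder);
    }
}
```

A standard test could assert on exactly this kind of figure, with a per-method tolerance.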

Perhaps the code could look something like this …

[Test]
public void Verify_MyMethodUnderTest_Is_Unlikely_To_Have_Common_Memory_Problem()
{
    //-Setup
    var ExpectationToleranceA = ...
    var ExpectationToleranceB = ...
    ...

    //-Execute
    var MeasurementA = MyClassUnderTest.MyMethodUnderTest( dependencyA );
    var MeasurementB = MyClassUnderTest.MyMethodUnderTest( dependencyB );
    ...

    //-Verify
    Assert.That( MeasurementA, Is.WithinTolerance( ExpectationToleranceA ) );
    Assert.That( MeasurementB, Is.WithinTolerance( ExpectationToleranceB ) );
}

There are other posts on memory pressure issues, but the idea here is to be able to quickly point a standard test at a method: the test would red-fail on common/classic memory pressure issues but green-pass the common solutions. A developer could then be pointed to the failing code and either fix the leak, adjust the tolerances, or remove the memory pressure test altogether.

Does this idea have legs?

There is a related question for a C++ app, Memory leak detection while running unit tests, which is similar but not quite the same thing. Twk's question is about looking at memory after all the tests have run ...

My idea here is for .NET to 1) unit test each method for common memory issues, 2) fail on the classic memory issues, 3) pass the classic fixes to those issues, 4) make it quick to throw a standard test at a function to see whether it exhibits classic symptoms, and 5) allow the standard TDD .NET memory pressure test applied in each unit test to be upgraded. This implies refactoring the above code so that upgrades to the standard test propagate to the memory tests applied throughout a project's NUnit test suite.

(P.S. I know there is no Is.WithinTolerance call; I was just demonstrating the idea.) Cheers ...
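One way such a reusable "standard memory test" might be packaged is sketched below. The MemoryAssert helper and its threshold are hypothetical (as is MyClassUnderTest, carried over from the pseudocode above); GC.GetTotalMemory and NUnit's Assert.That / Is.LessThan are real APIs, and NUnit's actual tolerance syntax is Is.EqualTo(...).Within(...) rather than the invented Is.WithinTolerance:

```csharp
using System;
using NUnit.Framework;

// Hypothetical reusable helper; names and thresholds are illustrative,
// not an established API.
public static class MemoryAssert
{
    public static void AllocatesLessThan(long maxBytes, Action actionUnderTest)
    {
        // Settle the heap so pre-existing garbage doesn't skew the figure.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        long before = GC.GetTotalMemory(forceFullCollection: true);

        actionUnderTest();

        long after = GC.GetTotalMemory(forceFullCollection: true);
        Assert.That(after - before, Is.LessThan(maxBytes));
    }
}

[TestFixture]
public class MyClassMemoryTests
{
    [Test]
    public void MyMethodUnderTest_stays_under_allocation_budget()
    {
        // The tolerance is a per-method judgement call; 64 KB is arbitrary here.
        MemoryAssert.AllocatesLessThan(64 * 1024,
            () => MyClassUnderTest.MyMethodUnderTest());
    }
}
```

Centralising the measurement in one helper is what makes point 5 work: upgrading MemoryAssert upgrades every memory test in the suite at once.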

Upvotes: 1

Views: 496

Answers (3)

Theo Lenndorff

Reputation: 4592

Good unit tests should be aimed at small pieces of code. Ideally they should be repeatable, which will not be the case when the garbage collector is involved.

Nevertheless, you can use unit testing framework facilities for non-unit tests (functional tests, regression tests, stress tests, ...). But you need to be aware that you are not doing real unit tests. So don't run them in automatic builds, and don't force other developers to include such tests in their commit tests. Real unit tests must not suffer from non-unit tests!

If you want to do something like this, consider invoking GC.Collect() before and after the operation you want to test. Make several calls in a row to sense growth in memory consumption more easily. Consider putting such tests in a separate overnight build (apart from the real unit tests), because they may be time consuming. Run the tests on a separate machine where you have full control (an open browser with some Flash animation, or a virus scanner running during the tests, may mess up your results). Store the memory consumption figures somewhere for later review. This will make you aware of slowly increasing memory consumption during long development cycles.
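The collect-before-and-after, many-calls-in-a-row approach described above might be sketched like this (the iteration count and the allocation being probed are illustrative only):

```csharp
using System;

static class LeakProbe
{
    // Runs the operation repeatedly, forcing full collections before and
    // after, and reports the net growth of the managed heap in bytes.
    static long NetGrowthAfter(int iterations, Action operation)
    {
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        long before = GC.GetTotalMemory(true);

        for (int i = 0; i < iterations; i++)
            operation();

        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        return GC.GetTotalMemory(true) - before;
    }

    static void Main()
    {
        // Many calls in a row make a slow leak easier to see above the noise.
        long growth = NetGrowthAfter(1000, () => { var buffer = new byte[1024]; });
        Console.WriteLine("Net growth: " + growth + " bytes");
        // Persist this figure per nightly build to spot slowly rising consumption.
    }
}
```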

Upvotes: 0

Brian Rasmussen

Reputation: 116401

I would say that it is a bad idea. If you want to write tests that "verify" some behavior of, say, the garbage collector, you're basically "testing" code that you have no control over. The exact behavior of the garbage collector is an implementation detail of the current CLR. It may change in the future, thus causing your tests to "fail". In most cases you will probably not be able to change anything in your code to "fix" the tests, so you're forced to change the tests to reflect the new implementation. That's of limited use in my opinion.

Unit tests should be used to verify the intentions of your own code, so you can be notified when changes break existing code. Use them to help develop and maintain your own code.

In my experience the best results are achieved by making sure unit tests have no dependencies. Doing the kind of tests you describe means that the tests will have many dependencies to both hardware and runtime system.

Just my 5 cents.

Upvotes: 0

Andrew Hare

Reputation: 351516

Unit tests are generally best employed to test small pieces of functionality. What you are after sounds a bit more like integration testing which tests the behavior and performance of an entire system.

The problem that I see with this approach is that any given unit in your system may not generate these memory-related errors. So even if you could get something like this to work you could not guarantee that memory issues would not arise once your units were working as a whole.

So my advice would be to do integration testing under multiple conditions. Test the system under different levels of load and see what kind of memory issues (if any) arise. This kind of testing will be much more beneficial to you.

Upvotes: 3
