Reputation: 18013
We have several different optimization algorithms that produce a different result on each run. For example, the goal of the optimization could be to find the minimum of a function, where 0 is the global minimum. The optimization runs return data like this:
[0.1, 0.1321, 0.0921, 0.012, 0.4]
Which is quite close to the global minimum, so this is ok. Our first approach was to just choose a threshold and let the unit test fail if a result occurred that was too high. Unfortunately, this does not work at all: the results seem to follow a Gaussian distribution, so, although unlikely, from time to time the test failed even though the algorithm was still fine and we just had bad luck.
So, how can I test this properly? I think quite a bit of statistics is needed here. It is also important that the tests stay fast; just running the test a few hundred times and taking the average would be too slow.
Here are some further clarifications:
For example, I have an algorithm that fits a circle to a set of points. It is extremely fast but does not always produce the same result. I want to write a unit test to guarantee that in most cases it is good enough.
Unfortunately I cannot choose a fixed seed for the random number generator, because I do not want to test if the algorithm produces the exact same result as before, but I want to test something like "With 90% certainty I get a result with 0.1 or better".
Upvotes: 16
Views: 1537
Reputation: 30089
It sounds like your optimizer needs two kinds of testing: (1) testing whether it actually finds good (near-optimal) solutions, and (2) testing whether the code correctly implements the intended algorithm.
Since the algorithm involves randomization, (1) is difficult to unit-test. Any test of a random process will fail some proportion of the time. You need to know some statistics to understand just how often it should fail. There are ways to trade off between how strict your test is and how often it fails.
But there are ways to write unit tests for (2). For example, you could reset the seed to a particular value before running your unit tests. Then the output is deterministic. That would not allow you to assess the average effectiveness of the algorithm, but that's for (1). Such a test would serve as a trip wire: if someone introduced a bug into the code during maintenance, a deterministic unit test might catch the bug.
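For illustration, here is a minimal NUnit-style sketch of such a trip-wire test. The Optimizer class, the Optimize(seed) signature, and the recorded value are placeholders, not your actual API:

using NUnit.Framework;

[TestFixture]
public class OptimizerTripWireTests
{
    [Test]
    public void Optimize_WithFixedSeed_ReproducesKnownResult()
    {
        // With a fixed seed the whole run is deterministic, so any change
        // in the output points to a change in the implementation.
        var optimizer = new Optimizer();
        double result = optimizer.Optimize(42);

        // Value recorded once from a known-good run of the implementation.
        Assert.AreEqual(0.0123, result, 1e-9);
    }
}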
There may be other things that could be unit tested. For example, maybe your algorithm is guaranteed to return values in a certain range no matter what happens with the randomized part. Maybe some value should always be positive, etc.
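The same goes for such invariants; a sketch, again with placeholder names:

[Test]
public void Optimize_AlwaysReturnsNonNegativeResult()
{
    var optimizer = new Optimizer();

    // The invariant must hold regardless of the randomized part,
    // so a handful of differently seeded runs is enough here.
    for (int seed = 0; seed < 10; seed++)
    {
        Assert.GreaterOrEqual(optimizer.Optimize(seed), 0.0);
    }
}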
Update: I wrote a chapter about this problem in the book Beautiful Testing. See Chapter 10: Testing a Random Number Generator.
Upvotes: 15
Reputation: 18013
Thanks for all the answers. I am now doing this: whenever a test looks like it is going to fail, it is rerun repeatedly until it is reasonably certain that it really has failed.
This seems to work, but I am not quite satisfied because I am only testing the median result.
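Roughly, each test now looks like this sketch (the names, threshold, and retry count are simplified placeholders):

[Test]
public void Optimize_UsuallyFindsAGoodResult()
{
    const double threshold = 0.1;
    var optimizer = new Optimizer();

    // Fast path: a single good run is accepted immediately.
    if (optimizer.Optimize() < threshold)
        return;

    // Suspicious run: repeat several times and judge the median instead,
    // so a single unlucky result does not fail the build.
    var results = new double[15];
    for (int i = 0; i < results.Length; i++)
        results[i] = optimizer.Optimize();
    System.Array.Sort(results);

    Assert.Less(results[results.Length / 2], threshold);
}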
Upvotes: 1
Reputation: 49649
Your algorithms probably have a random component. Bring it under control.
You can either fix the seed of the random number generator, or make the random component replaceable so that the test can substitute something deterministic.
The second option is probably the best, since that will make it easier for you to reason about what the correct result of the algorithm is.
When unit-testing algorithms, what you want to verify is that you have correctly implemented the algorithm. Not whether the algorithm does what it is supposed to do. Unit-tests should not treat the code-under-test as a black box.
You may want to have a separate "performance"-test to compare how different algorithms perform (and whether they actually work), but your unit-tests are really for testing your implementation of the algorithm.
For example, when implementing the Foo-Bar-Baz Optimization Algorithm (TM) you might have accidentally written x:=x/2 instead of x:=x/3. This might mean that the algorithm runs slower, but still finds the same solution. You will need white-box testing to find such an error.
Edit:
Unfortunately I cannot choose a fixed seed for the random number generator, because I do not want to test if the algorithm produces the exact same result as before, but I want to test something like "With 90% certainty I get a result with 0.1 or better".
I cannot see any way to make a test that is both automatically verifiable and stochastic. Especially not if you want to have any chance of distinguishing real errors from statistical noise.
If you want to test "With 90% certainty I get a result with 0.1 or better", I would suggest something like:
double expectedResult = ...;   // the known optimum for this test problem
double resultMargin = 0.1;
int successes = 0;

// Run the optimizer with 100 different, but fixed, seeds.
for (int i = 0; i < 100; i++)
{
    int randomSeed = i;
    double result = optimizer.Optimize(randomSeed);
    if (Math.Abs(result - expectedResult) < resultMargin)
        successes++;
}

// At least 90 of the 100 seeded runs must land within the margin.
Assert.GreaterOrEqual(successes, 90);
(Note that this test is deterministic).
Upvotes: 7
Reputation: 3117
A unit test should never have an unknown pass/fail state. If your algorithm is returning different values when run with the same inputs multiple times, you are probably doing something screwy in your algorithm.
I would take each of the 5 optimization algorithms and test them to make sure that given a set of inputs x, you get back an optimized value of y every time.
EDIT: To address random components of your system, you can either introduce the ability to pass in the seed for the random number generator, or you can use a mocking library (e.g. RhinoMocks) to force it to use a particular number when the RNG is asked for a random number.
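For example (the IRandomSource abstraction and the stub below are only illustrative; a mocking library would generate the stub for you):

// Abstraction over the RNG so tests can substitute a deterministic source.
public interface IRandomSource
{
    double NextDouble();
}

// Hand-rolled stub that always returns the same value; RhinoMocks could
// generate an equivalent stub instead.
public class FixedRandomSource : IRandomSource
{
    private readonly double value;
    public FixedRandomSource(double value) { this.value = value; }
    public double NextDouble() { return value; }
}

[Test]
public void Optimize_WithStubbedRandomSource_IsDeterministic()
{
    var optimizer = new Optimizer(new FixedRandomSource(0.5));
    Assert.AreEqual(optimizer.Optimize(), optimizer.Optimize());
}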
Upvotes: 7
Reputation: 121772
Both JUnit and NUnit can assert floating-point values with a tolerance/delta value, i.e. you test whether the output is the correct value give or take some amount. In your case the correct value to check against is 0, with tolerance 0.5, if you want all the values in the given output to pass (or an expected value of 0.20 with tolerance ±0.20).
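For example, with NUnit's delta overload (the numbers mirror the sample output in the question):

double result = optimizer.Optimize();

// Expected value 0 with tolerance 0.5 accepts anything in [-0.5, 0.5] ...
Assert.AreEqual(0.0, result, 0.5);

// ... or, tighter, expected value 0.20 with tolerance ±0.20, i.e. [0.0, 0.4].
Assert.AreEqual(0.20, result, 0.20);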
Because of the random nature of your results, you may want to unit test parts of the algorithm to make sure it really does what it is supposed to.
Upvotes: 0
Reputation: 28865
I would suggest that, rather than having your test run against the code producing the Gaussian distribution, you create a Monte Carlo-type harness that runs the method many times and then tests the overall distribution of results against the appropriate distribution model. If it is an average, for example, you will be able to test against a firm threshold. If it is more complex, you'll need to create code that models the appropriate distribution (e.g. do values < x make up y% of my results?).
Keep in mind that you aren't testing the number generator, you are testing the unit that generates the values!
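A sketch of the idea (the run count and the 90% requirement are just example numbers):

[Test]
public void Optimize_ResultDistribution_MeetsQualityBar()
{
    const int runs = 100;
    const double threshold = 0.1;
    int goodResults = 0;

    var optimizer = new Optimizer();
    for (int i = 0; i < runs; i++)
    {
        if (optimizer.Optimize() < threshold)
            goodResults++;
    }

    // Test the distribution of outcomes, not any single outcome:
    // at least 90% of the runs must beat the threshold.
    Assert.GreaterOrEqual(goodResults, 90);
}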
Upvotes: 1
Reputation: 1500765
Let the tests run, and if any of them fail, rerun just those tests 50 times and see what proportion of the time they fail. (In an automated way, of course.)
Upvotes: 5