Reputation: 696
I'm working on a Geometry library. There are 200+ unit tests.
There's a particularly stubborn test that fails whenever I select "Run All", but passes when I run it individually or step through it with the debugger. I believe the issue first showed up around the time I moved from Visual Studio 2013 to Visual Studio 2015.
Now some notes about the geometry library:
The objects are immutable.
The tests have no shared objects between them.
So my question: what are the possible causes of this odd behavior?
Edit:
[Test()]
public void Plane_IntersectionWithPlane_IdenticalPlane()
{
    // Intersect a plane with itself and expect one specific line back.
    Plane testPlane = new Plane(new Direction(Point.MakePointWithInches(2, -1, 1)),
        Point.MakePointWithInches(2, 1, 2));

    Line found = testPlane.Intersection(testPlane);
    Line expected = new Line(new Direction(Point.MakePointWithInches(0, -1, -1)),
        Point.MakePointWithInches(2, 1, 2));

    Assert.IsTrue(found.Equals(expected));
}
Upvotes: 16
Views: 756
Reputation: 3361
If this test passes in isolation but fails when running alongside other tests then it is extremely likely that you have some shared state in your classes.
Since you say that all your objects are immutable, my first guess would be to look at the implementation of
Line Intersection(Plane plane);
You expect this method to return a Line that compares equal to your "expected" line, so perhaps Intersection returns a line that depends on some shared state.
Could you show us the implementation of Intersection?
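For illustration only, here is the kind of pattern that could produce this behaviour even though Plane and Line themselves are immutable. The static cache below is a purely hypothetical sketch, not the library's actual implementation:

using System.Collections.Generic;

public class Plane
{
    // Hypothetical: a static cache shared by every test in the AppDomain.
    private static readonly Dictionary<string, Line> _intersectionCache =
        new Dictionary<string, Line>();

    public Line Intersection(Plane other)
    {
        string key = other.ToString();

        // If an earlier test already populated the cache, later tests get the
        // stored Line back instead of a freshly computed one - the instances
        // are immutable, but the result still depends on run order.
        Line cached;
        if (_intersectionCache.TryGetValue(key, out cached))
        {
            return cached;
        }

        Line result = ComputeIntersection(other);
        _intersectionCache[key] = result;
        return result;
    }

    // Placeholder for the real geometry.
    private Line ComputeIntersection(Plane other)
    {
        return null;
    }
}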
Upvotes: 0
Reputation: 1428
Please check whether there are any static variables involved. Static variables live at the AppDomain level, so it's possible that a static variable set by one test case causes side effects in other test cases.
You could run this test case in a separate AppDomain first to confirm the behaviour before starting the search for a static variable. I'm sorry, I have never tried creating a new AppDomain in NUnit myself, but one of the answers here gives a hint about how to create a new AppDomain for a test case: Run unit tests in different appdomain with NUnit
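A minimal sketch of what that failure mode looks like in practice (the GeometrySettings class and both tests below are invented for illustration, not taken from the question's library):

using NUnit.Framework;

// Hypothetical shared settings - static, so it lives for the whole test run.
public static class GeometrySettings
{
    public static double EqualityTolerance = 0.001;
}

[TestFixture]
public class ToleranceTests
{
    [Test]
    public void A_WidensToleranceForItsOwnScenario()
    {
        // Mutates static state and never resets it.
        GeometrySettings.EqualityTolerance = 0.5;
        Assert.IsTrue(GeometrySettings.EqualityTolerance > 0.4);
    }

    [Test]
    public void B_AssumesDefaultTolerance()
    {
        // Passes when run alone; fails in a full run once the test above
        // has already widened the tolerance.
        Assert.AreEqual(0.001, GeometrySettings.EqualityTolerance);
    }
}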
Upvotes: 0
Reputation: 10839
I've come across similar issues in the past, but it has always turned out to be some unexpected interaction between the underlying code elements, or the way the tests have been written.
I've found the best way to track down the issue is an approach similar to the one suggested by @Matthew Strawbridge in the comments: add Ignore attributes to tests/test fixtures to remove tests until the run-all starts working, then start adding them back in until it breaks again, narrowing down the problem (a sketch of this is shown below).
Sometimes you will also find that ignoring the test that is currently failing will cause another test to fail instead. This is a good indication that the problem is actually being caused by another one of your tests not cleaning up after itself correctly.
Looking at the code shared between the tests that fail and the tests that seem to trigger the failure can then help you narrow down the interaction. The error/failure reason for the tests can also help, of course.
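As a rough sketch of the bisection approach (the fixture and test names below are made up, not taken from the question's project), the Ignore attribute can be applied per fixture or per test while narrowing things down:

using NUnit.Framework;

// Whole fixture temporarily excluded while bisecting the run-all failure.
[TestFixture]
[Ignore("Bisecting: checking whether this fixture interferes with the Plane tests")]
public class LineTests
{
    [Test]
    public void Line_Rotate_AboutZAxis()
    {
        // ...
    }
}

[TestFixture]
public class PlaneTests
{
    // Individual tests can be excluded the same way.
    [Test]
    [Ignore("Bisecting")]
    public void Plane_IntersectionWithPlane_Perpendicular()
    {
        // ...
    }
}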
Selecting 'Run All' runs all of the tests in a predictable order, so the tests will generally be run the same way every time. If you select a batch of tests, the runner may choose to run them in a different order, based on your selection order, which may be why you experience different behaviour when selecting tests rather than using run-all.
Upvotes: 7
Reputation: 573
Try the following:
Open the NUnit GUI and, before you run the tests, change the setting shown above.
It once helped me track down my problem.
BTW: what version of NUnit are you using? If you don't know what the NUnit GUI is, then you probably didn't download NUnit separately. You can get the installer from here:
http://www.nunit.org/index.php?p=download
Upvotes: 3
Reputation: 13448
From what I see in the test case, the diverging behaviour is strange, but the test failing is a perfectly valid result:
A plane intersected with itself is the plane itself, so if the result is restricted to being a line, then any line through two distinct points on the plane is a valid result. Testing for one particular line looks like reverse-engineering execution results into test expectations.
It would be a different story if the requirement/specification is very specific about the direction of the resulting line from an identity intersection, so maybe you can provide some more insight in this area.
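If the specification doesn't pin down which particular line comes back from a self-intersection, one option is to assert only the properties any valid result must have, rather than one hard-coded line. This is a hedged sketch: Contains(Line) below is an assumed membership check, not a confirmed member of the question's library:

[Test]
public void Plane_IntersectionWithPlane_IdenticalPlane_ReturnsLineInPlane()
{
    Plane testPlane = new Plane(new Direction(Point.MakePointWithInches(2, -1, 1)),
        Point.MakePointWithInches(2, 1, 2));

    Line found = testPlane.Intersection(testPlane);

    // Any line lying in the plane satisfies the spec, so only assert that.
    // Contains(Line) is hypothetical - substitute whatever membership check
    // the library actually provides.
    Assert.IsNotNull(found);
    Assert.IsTrue(testPlane.Contains(found));
}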
I'm sorry for posting this as answer rather than a comment - missing rep.
Upvotes: 0