Reputation: 4694
Let's say I have a function
bool f(int x) {
    if (x < 10) return true;
    return false;
}
Ideally, you would need 2^32 test cases to cover every value from INT_MIN to INT_MAX. Of course this is not practical.
To make life easier, we write test cases for a few representative values instead, e.g. the boundary values x = 9 and x = 10.
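For example, such boundary tests might look like this (a Python sketch; the function and its name are stand-ins for the pseudocode above):

```python
def is_small(x):
    # Stand-in for the function in the question.
    if x < 10:
        return True
    return False

# Boundary tests: one value on each side of the x < 10 threshold,
# plus a representative from deep inside each region.
assert is_small(9) is True     # last value that satisfies x < 10
assert is_small(10) is False   # first value that does not
assert is_small(-1000) is True
assert is_small(1000) is False
```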
These test cases are fine, but they do not cover every case. Let's say one day someone modifies the function to be
bool f(int x) {
    if (x == 12) return true;
    if (x < 10) return true;
    return false;
}
He will run the tests and see that they all pass. How do we make sure we cover every scenario without going to extremes? Is there a keyword for the issue I am describing?
Upvotes: 3
Views: 1167
Reputation: 12181
No, there's currently no general algorithm for this that doesn't involve some kind of very intensive computation (e.g. testing lots and lots of cases), but you can write your unit tests in such a way that they have a higher probability of failing when the method changes. For example, in the case given, write a test for x = 10. For the other two regions, pick a couple of random numbers between 11 and int.MaxValue and test those, then test a couple of random numbers between int.MinValue and 9. The tests wouldn't necessarily fail after the modification you describe, but there's a better chance that they would than if you had just hardcoded a single value.
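A sketch of that idea in Python (the function name and the 32-bit bounds are assumptions for illustration):

```python
import random

def is_small(x):
    # The method under test (name invented for this sketch).
    return x < 10

INT_MIN, INT_MAX = -2**31, 2**31 - 1  # assuming 32-bit int bounds

# A fixed boundary check plus a few random probes in each region.
# Randomized inputs give a sneaked-in special case like "x == 12"
# at least some chance of being hit, unlike hardcoded values.
assert is_small(10) is False
for _ in range(5):
    assert is_small(random.randint(11, INT_MAX)) is False
    assert is_small(random.randint(INT_MIN, 9)) is True
```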
Also, as @GuyCoder pointed out in his excellent answer, even if you did try to do something like that, it's remarkably difficult (or impossible) to prove that there are no possible changes to a method that would break your test.
Also, keep in mind that no kind of test automation (including unit testing) is a foolproof method of testing; even under ideal conditions, you generally can't 100% prove that your program is correct. Keep in mind that virtually all software testing approaches are fundamentally empirical methods and empirical methods can't really achieve 100% certainty. (They can achieve a good deal of certainty, though; in fact, many scientific papers achieve 95% certainty or higher - sometimes much higher - so in cases like that the difference may not be all that important). For example, even if you have 100% code coverage, how do you know that there's not an error in the tests somewhere? Are you going to write tests for the tests? (This can lead to a turtles all the way down type situation).
If you want to get really literal about it and you buy into David Hume, you really can't ever be 100% sure about something based on empirical testing; the fact that a test has passed every time you've run it doesn't mean that it'll continue to pass in the future. I digress, though.
If you're interested, formal verification studies methods of deductively proving that the software (or, at least, certain aspects of the software) is correct. The major issue is that it tends to be very difficult or impossible to formally verify a complete system of any complexity, especially if you're using third-party libraries that aren't themselves formally verified. (Those difficulties, along with the cost of learning the techniques in the first place, are among the main reasons formal verification hasn't really taken off outside of academia and certain very narrow industry applications.)
A final point: software ships with bugs. You'd be hard-pressed to find any complicated system that was 100% defect-free at the time that it was released. As I mentioned above, there is no currently-known technique to guarantee that your testing found all of the bugs (and if you can find one you'll become a very wealthy individual), so for the most part you'll have to rely on statistical measures to know whether you've tested adequately.
TL;DR No, you can't, and even if you could you still couldn't be 100% sure that your software was correct (there might be a bug in your tests, for example). For the foreseeable future, your unit test cases will need maintenance too. You can write the tests to be more resilient against changes, though.
Upvotes: 0
Reputation: 24976
This is partly a comment partly an answer because of the way you phrased the question.
Is it possible to write a unit test that cover everything?
No. Even in your example you limit the test cases to 2^32, but what if the code is moved to a 64-bit system and then someone adds a line using 2^34 or something?
Also, your question suggests that you are thinking of static test cases with dynamic code (dynamic in the sense that the code is changed over time by a programmer, not that it modifies itself at runtime). You should be thinking of dynamic test cases with dynamic code.
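One way to make the test cases dynamic is to regenerate the inputs on every run and check the result against an oracle. A minimal Python sketch (all names invented; the oracle here trivially mirrors the implementation, which a real test must avoid):

```python
import random

def is_small(x):
    # Implementation under test (name invented).
    return x < 10

def spec(x):
    # Oracle: an independent restatement of the requirement.
    # In real code this must be derived from the specification,
    # not copied from the implementation, or the test proves nothing.
    return x < 10

# Dynamic test cases: inputs are generated fresh on every run rather
# than hardcoded, so the tested points change as the code evolves.
for _ in range(1000):
    x = random.randint(-2**31, 2**31 - 1)
    assert is_small(x) == spec(x), x
```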
Lastly you did not note if it was white, gray or black box testing.
Let a tool analyze the code and generate the test data.
See: A Survey on Automatic Test Data Generation
Also, you asked about keywords for searching.
Here is a Google search for this that I found of value:
code analysis automated test generation survey
I have never used one of these test case tools myself, as I use Prolog DCGs to generate my test cases. On a current project I generate millions of test cases in about two minutes and run them over a few more minutes. Some of the failing test cases are ones I would never have thought up on my own, so while some may consider this overkill, it works.
Since many people don't know Prolog DCGs, here is a similar approach explained using C# with LINQ by Eric Lippert: Every Binary Tree There Is.
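To give the flavor of generated (rather than hand-written) test cases without DCGs or LINQ, here is a small Python sketch (names and bounds invented) that exhaustively enumerates a reduced domain:

```python
def is_small(x):
    # Stand-in for the function in the question.
    return x < 10

INT_MIN, INT_MAX = -2**31, 2**31 - 1  # assuming 32-bit int bounds

# Exhaustive enumeration over a reduced domain: every value in a window
# around the x < 10 boundary, plus the extremes of the type. A sneaked-in
# special case like "x == 12" falls inside the window and gets caught.
cases = list(range(-100, 101)) + [INT_MIN, INT_MIN + 1, INT_MAX - 1, INT_MAX]
for x in cases:
    assert is_small(x) == (x < 10), x
```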
Upvotes: 2