Reputation: 1994
This question is about how far it makes sense to take unit testing.
I’ve been writing a fairly typical program that updates a database with information arriving in XML messages. I thought about the unit tests it needed. The program inserts or updates records according to complicated rules, which spawns many different cases. At first, I decided to test the following conditions for each case:
It seemed to me that the third type of test really made sense. But I soon found that it isn’t so easy to implement, because you essentially need to snapshot the database and then compare the snapshot with the modified state. I quickly grew annoyed at having to write such tests for the different cases of database modification, while the tests themselves weren’t very valuable or informative in terms of specifying and designing the production code.
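The “snapshot and compare” idea can be sketched concretely. The following is a minimal, hypothetical example (the `users` table and the `update_user_email` function are assumptions, not from the question): snapshot the rows that should *not* change, run the code under test, then assert the snapshot still matches.

```python
# Hypothetical sketch of a "snapshot and compare" test: verify that an
# update touches only the intended row, using an in-memory SQLite database.
import sqlite3

def update_user_email(conn, user_id, email):
    # Stand-in for the production code under test.
    conn.execute("UPDATE users SET email = ? WHERE id = ?", (email, user_id))

def test_update_leaves_other_rows_untouched():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(1, "a@example.com"), (2, "b@example.com")])

    # Snapshot every row that should NOT change.
    before = conn.execute(
        "SELECT id, email FROM users WHERE id != 1 ORDER BY id").fetchall()

    update_user_email(conn, 1, "new@example.com")

    # The targeted row changed...
    assert conn.execute(
        "SELECT email FROM users WHERE id = 1"
    ).fetchone()[0] == "new@example.com"
    # ...and the snapshot of irrelevant rows is identical.
    after = conn.execute(
        "SELECT id, email FROM users WHERE id != 1 ORDER BY id").fetchall()
    assert after == before
```

As the question notes, the tedious part is repeating this snapshot bookkeeping for every modification case.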
Then I thought: maybe I’m testing too much? And if not, then if I test that the program does NOT modify irrelevant records, why don’t I also test that it:
I’m completely confused about where to draw the line. Where would you draw it?
UPDATE
I read many useful hints in the answers and marked one as the solution because it had the most useful ideas for me, but it’s still not clear to me how to properly test database updates. Does it make sense to test that the program doesn’t change too much? And if so, how thoroughly?
Upvotes: 5
Views: 182
Reputation: 116306
If one had an infinite amount of time and resources (and nothing better to do with them :-), it would be fine (albeit still rather boring, mostly) to test everything that is possible to test. However, we live in real life, where we face time and resource pressures, so we must prioritize our testing efforts. Kent Beck summed it up nicely as "test everything which could possibly break".
Note that this is also true of integration/system/acceptance testing to some extent, but the difference is that unit tests are white-box tests, so you are supposed to know both
This allows you to focus your unit testing efforts on the real issues. Does your method write data to an output stream? Yes? Then it makes perfect sense to test what is (not) written there. Does your method do anything related to mailing? No? Then there is no point testing that it does not send a mail to Santa.
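The output-stream case above can be made concrete. This is a hedged sketch, not code from the answer: the `write_report` function and its record format are assumptions, but the pattern of asserting on exactly what was (and was not) written is the point.

```python
# If the method writes to a stream, test what is (not) written there,
# using an in-memory StringIO as the output stream.
import io

def write_report(stream, records):
    # Stand-in for the production code: inactive records must be skipped.
    for rec in records:
        if rec.get("active"):
            stream.write(f"{rec['name']}\n")

def test_only_active_records_written():
    out = io.StringIO()
    write_report(out, [{"name": "alice", "active": True},
                       {"name": "bob", "active": False}])
    text = out.getvalue()
    assert "alice" in text      # the right data is written...
    assert "bob" not in text    # ...and nothing irrelevant is
```

Conversely, if the method has nothing to do with mailing, no test about mail is warranted.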
System/acceptance tests are different in that
Therefore it makes more sense to cast a wider net there: test more "crazy" scenarios and verify more of the things the program should not do.
Upvotes: 1
Reputation: 2321
I think a good unit test is one that exercises all the lines of code of a function and all the requirements/use cases it covers.
For simple functions, this usually means testing the correct functionality and the errors it handles, and verifying that the behavior is as expected.
I would draw the line there for unit testing. I would keep your other concerns in mind and address them in system or integration testing. That means exercising the application in any number of scenarios and then checking that the database is still correct, no file was deleted, and Santa didn’t receive any email. You can test any number of possibilities; of course, the more unreasonable the things you test, the more time you are wasting.
Upvotes: 0
Reputation: 39773
It's easier to test what it should do than what it should not. A few exceptions:
Upvotes: 2
Reputation: 499382
You draw the line at the point where tests stop being useful and no longer tell you something about your code.
Is it useful to know that your software doesn't send emails to Santa? No. Then don't test for that.
Knowing that the data access layer is doing the right thing is useful - that the right updates are happening.
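One way to check that "the right updates are happening" without snapshotting a real database is to assert on the calls the data access layer makes. This is a hypothetical sketch (the `save_user` function and its `db` collaborator are assumptions) using Python's standard `unittest.mock`:

```python
# Verify the data access layer issues exactly the expected update call,
# using a mock in place of the database connection.
from unittest import mock

def save_user(db, user):
    # Stand-in for the production data access code.
    db.execute("UPDATE users SET email = ? WHERE id = ?",
               (user["email"], user["id"]))

def test_save_user_issues_expected_update():
    db = mock.Mock()
    save_user(db, {"id": 7, "email": "x@example.com"})
    # assert_called_once_with fails if any other call was made,
    # which also guards against unintended extra updates.
    db.execute.assert_called_once_with(
        "UPDATE users SET email = ? WHERE id = ?", ("x@example.com", 7))
```

Because the mock records every call, this style also catches the "changes too much" case: any extra `execute` would make `assert_called_once_with` fail.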
Upvotes: 4