Reputation: 4830
This question relates to PHPUnit, although it is really a general xUnit design question.
I'm writing a unit test case for a class Image. One of the methods of this class is setBackgroundColor().
There are 4 different behaviors I need to test for this method:
invalid input that should raise an error
array(255, 255, 255)
array('red' => 255, 'green' => 255, 'blue' => 255) (this is the output format of the GD function imagecolorsforindex())
IMG_COLOR_TRANSPARENT
At the moment, I have all of this contained within one test in my test case, called testSetBackgroundColor(); however, I'm getting the feeling these should be 4 separate tests, as the test is getting quite long and doing a lot.
My question is: what should I do here? Do I encapsulate all this into one test of the Image test case, or do I split the above into separate tests such as:
testSetBackgroundColorErrors
testSetBackgroundColorShorthandRGB
testSetBackgroundColorRGB
testSetBackgroundColorTransparent
I've put the test in question here http://pastebin.com/f561fc1ab.
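Roughly, the split would look something like this (a sketch only: the expected values, the exception type, and the getBackgroundColor() getter are placeholders, not the real assertions from the pastebin):

<?php
// Sketch only: Image and setBackgroundColor() are the class under test from
// the question; the expected values, exception type, and getBackgroundColor()
// getter are placeholders.
class ImageTest extends PHPUnit_Framework_TestCase
{
    public function testSetBackgroundColorErrors()
    {
        $this->setExpectedException('InvalidArgumentException'); // assumed exception type
        $image = new Image();
        $image->setBackgroundColor('not a color');
    }

    public function testSetBackgroundColorShorthandRGB()
    {
        $image = new Image();
        $image->setBackgroundColor(array(255, 255, 255));
        $this->assertEquals(array(255, 255, 255), $image->getBackgroundColor());
    }

    public function testSetBackgroundColorRGB()
    {
        $image = new Image();
        $image->setBackgroundColor(array('red' => 255, 'green' => 255, 'blue' => 255));
        $this->assertEquals(array(255, 255, 255), $image->getBackgroundColor());
    }

    public function testSetBackgroundColorTransparent()
    {
        $image = new Image();
        $image->setBackgroundColor(IMG_COLOR_TRANSPARENT);
        $this->assertEquals(IMG_COLOR_TRANSPARENT, $image->getBackgroundColor());
    }
}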
Thanks.
Upvotes: 5
Views: 445
Reputation: 555
Yes, you should split these into four tests. Maybe you are reluctant to do so because it would duplicate code. I read an article that argued unit tests should be very readable (sorry, I don't have the reference). It went on to discuss ways to achieve that, but the gist of it was to write utility functions.
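For example, a small private helper keeps each test to one readable line without duplicating the setup (a minimal sketch, assuming the Image API and a getBackgroundColor() getter as in the question):

<?php
// Sketch: the helper removes the duplicated arrange/act/assert steps while
// each test method still documents exactly one behavior.
class ImageTest extends PHPUnit_Framework_TestCase
{
    public function testSetBackgroundColorShorthandRGB()
    {
        $this->assertBackgroundColorBecomes(array(255, 255, 255), array(255, 255, 255));
    }

    public function testSetBackgroundColorRGB()
    {
        $this->assertBackgroundColorBecomes(
            array(255, 255, 255),
            array('red' => 255, 'green' => 255, 'blue' => 255)
        );
    }

    private function assertBackgroundColorBecomes($expected, $input)
    {
        $image = new Image(); // assumed constructor, as in the question
        $image->setBackgroundColor($input);
        $this->assertEquals($expected, $image->getBackgroundColor()); // getter name assumed
    }
}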
Upvotes: 0
Reputation: 5128
I conceptually split my testing into two categories (as quite a few TDD practitioners do): integration tests and unit tests. A unit test should test one thing, and I should be disciplined about testing the single contract I'm writing at any given moment -- in general, one method needs one test. This forces me to write small, testable methods that I have a high degree of confidence in, which in turn tends to guide me towards writing small, testable classes.
Integration tests are higher-level tests that exercise the interactions between components which the unit tests have already proven to work correctly in isolation. I write fewer of these, and they have to be applied judiciously, as there can never be full integration-level coverage. They focus on proving out the riskier areas of interaction between components, and may use written acceptance tests as a guide.
Identifying the areas that need integration testing is more of a 'feel' thing. If you've been disciplined about the unit tests, you should have a good idea where integration tests are needed: areas with deeper call stacks, cross-process interaction, or the like, where you know the risk is higher. Alternatively, you may decide integration tests are a good way to prove high-level behavioral expectations that map onto the product owner's written requirements, which is a good use as well.
Upvotes: 0
Reputation: 22638
My preference is to split the tests as you describe.
Upvotes: 3
Reputation: 19220
Split it. Absolutely.
When a unit test fails, it must be immediately clear what exactly is broken. If you combine the tests, you'll end up debugging the test itself to find out which case failed.
By the way, are you writing the tests first? With TDD it's unlikely you'll end up with bloated tests.
Upvotes: 8