Stephen Melrose

Reputation: 4830

Unit test separation

This question is related to PHPUnit, although it should be a global xUnit design question.

I'm writing a Unit test case for a class Image.

One of the methods of this class is setBackgroundColor().

There are 4 different behaviors I need to test for this method:

  1. Trying to set an invalid background color. Multiple invalid parameters will be tested.
  2. Trying to set a valid background color using a short hand RGB array, e.g. array(255,255,255)
  3. Trying to set a valid background color using a standard RGB array, e.g. array('red' => 255, 'green' => 255, 'blue' => 255) (this is the output format of the GD function imagecolorsforindex())
  4. Trying to set a valid background color using the transparent constant IMG_COLOR_TRANSPARENT

At the moment, I have all of this contained in one test in my test case, called testSetBackgroundColor(). However, I'm getting the feeling these should be four separate tests, as the single test is getting quite long and doing a lot.

My question is: should I keep all of this encapsulated in one test of the Image test case, or should I split it into four separate tests, one per behavior?

I've put the test in question here http://pastebin.com/f561fc1ab.
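For illustration, splitting the four behaviors into separate test methods might look roughly like the sketch below. The Image class here is a simplified hypothetical stand-in for the real one (which is in the pastebin), and plain assert() is used in place of PHPUnit assertions so the sketch runs standalone. IMG_COLOR_TRANSPARENT is GD's constant; it is defined manually only if the extension isn't loaded.

```php
<?php
// Sketch only: a simplified stand-in for the question's Image class.
if (!defined('IMG_COLOR_TRANSPARENT')) {
    define('IMG_COLOR_TRANSPARENT', -5); // GD's value when the extension is loaded
}

class Image
{
    private $background;

    public function setBackgroundColor($color)
    {
        if ($color === IMG_COLOR_TRANSPARENT) {
            $this->background = 'transparent';
            return;
        }
        // Accepts array(255, 255, 255) or
        // array('red' => 255, 'green' => 255, 'blue' => 255)
        if (is_array($color) && count($color) === 3) {
            foreach (array_values($color) as $v) {
                if (!is_int($v) || $v < 0 || $v > 255) {
                    throw new InvalidArgumentException('Invalid background color');
                }
            }
            $this->background = array_values($color);
            return;
        }
        throw new InvalidArgumentException('Invalid background color');
    }

    public function getBackgroundColor()
    {
        return $this->background;
    }
}

// One test per behavior, PHPUnit-style names, plain assert() so the
// sketch runs without PHPUnit.
function testSetBackgroundColorRejectsInvalidValues()
{
    $image = new Image();
    foreach (array('#fff', 42, array(256, 0, 0), null) as $invalid) {
        $thrown = false;
        try {
            $image->setBackgroundColor($invalid);
        } catch (InvalidArgumentException $e) {
            $thrown = true;
        }
        assert($thrown);
    }
}

function testSetBackgroundColorAcceptsShortHandRgbArray()
{
    $image = new Image();
    $image->setBackgroundColor(array(255, 255, 255));
    assert($image->getBackgroundColor() === array(255, 255, 255));
}

function testSetBackgroundColorAcceptsStandardRgbArray()
{
    $image = new Image();
    $image->setBackgroundColor(array('red' => 255, 'green' => 255, 'blue' => 255));
    assert($image->getBackgroundColor() === array(255, 255, 255));
}

function testSetBackgroundColorAcceptsTransparentConstant()
{
    $image = new Image();
    $image->setBackgroundColor(IMG_COLOR_TRANSPARENT);
    assert($image->getBackgroundColor() === 'transparent');
}

testSetBackgroundColorRejectsInvalidValues();
testSetBackgroundColorAcceptsShortHandRgbArray();
testSetBackgroundColorAcceptsStandardRgbArray();
testSetBackgroundColorAcceptsTransparentConstant();
echo "all four tests passed\n";
```

Each method's name states exactly which behavior it covers, and each starts from a fresh Image.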

Thanks

Upvotes: 5

Views: 445

Answers (4)

PaulC

Reputation: 555

Yes, you should split these into four tests. Perhaps you are reluctant to because it would duplicate code. I once read an article arguing that unit tests should be very readable (sorry, I don't have the reference). It discussed several ways to achieve that, but the gist was to write utility functions.
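As a sketch of that utility-function idea: a small helper hides the try/catch noise so each of the split tests stays one readable line. The names colorIsRejected() and validateColor() below are hypothetical, standing in for whatever validation the real Image class does.

```php
<?php
// Hypothetical validator standing in for Image::setBackgroundColor()'s checks.
function validateColor($color)
{
    if (!is_array($color) || count($color) !== 3) {
        throw new InvalidArgumentException('Invalid color');
    }
    foreach (array_values($color) as $v) {
        if (!is_int($v) || $v < 0 || $v > 255) {
            throw new InvalidArgumentException('Invalid color');
        }
    }
}

// Utility function: wraps the exception handling once, so tests don't repeat it.
function colorIsRejected($color)
{
    try {
        validateColor($color);
    } catch (InvalidArgumentException $e) {
        return true;
    }
    return false;
}

// Each check now reads like a sentence:
assert(colorIsRejected('#fff'));
assert(colorIsRejected(array(999, 0, 0)));
assert(!colorIsRejected(array(255, 255, 255)));
echo "helper-based checks passed\n";
```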

Upvotes: 0

Dave Sims

Reputation: 5128

I conceptually split my testing into two categories (as quite a few TDD practitioners do): integration tests and unit tests. A unit test should test one thing, and I should be disciplined about testing the single contract that I'm writing at any given moment -- in general, one method needs one test. This forces me to write small, testable methods that I have a high degree of confidence in, which in turn tends to guide me toward writing small, testable classes.

Integration tests are higher-level tests that prove interaction concerns between components that otherwise are proven to work as expected in isolation by unit tests. I write fewer of these, and they have to be applied judiciously, as there can never be full integration-level coverage. These focus on proving out the riskier areas of interaction between various components, and may use written acceptance tests as a guide.

Identifying areas that need integration testing is more of a 'feel' thing. If you've been disciplined about the unit tests, you should have a good idea of where the integration-test needs are, i.e. areas with deeper call stacks, cross-process interaction, or the like, where you know there's higher risk. Alternatively, you may decide integration tests are a good way to prove high-level behavioral expectations that map onto the product owner's written requirements; that is a good use as well.

Upvotes: 0

Paolo

Reputation: 22638

My preference is to split the tests as you describe.

  • It makes it more obvious what's gone wrong when a test fails, and is therefore quicker to debug
  • You get the benefit of a reset of the objects to a clean starting state between test conditions
  • It makes it easier to see which tests you've included/omitted just by looking at the method names
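The "multiple invalid parameters" from case 1 fit this naming scheme too: PHPUnit's @dataProvider annotation keeps them inside one named test while still reporting each failing input separately. The loop below emulates what the provider does, so the sketch runs without PHPUnit; isValidColor() is a hypothetical stand-in for the real validation.

```php
<?php
// In PHPUnit this would be a provider method referenced by
// /** @dataProvider invalidColorProvider */ on the test; here we
// iterate it by hand so the snippet runs standalone.
function invalidColorProvider()
{
    return array(
        'string'        => array('#fff'),
        'out of range'  => array(array(256, 0, 0)),
        'too few parts' => array(array(255, 255)),
    );
}

// Hypothetical validator standing in for Image's color checks.
function isValidColor($color)
{
    if (!is_array($color) || count($color) !== 3) {
        return false;
    }
    foreach (array_values($color) as $v) {
        if (!is_int($v) || $v < 0 || $v > 255) {
            return false;
        }
    }
    return true;
}

foreach (invalidColorProvider() as $label => $args) {
    assert(!isValidColor($args[0]));
    echo "rejected: $label\n";
}
```

When a provider row fails under PHPUnit, the failure message includes the row's key ('out of range', etc.), preserving the "obvious what's gone wrong" benefit.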

Upvotes: 3

Ivan Krechetov

Reputation: 19220

Split it. Absolutely.

When a unit test fails it must be immediately clear what exactly is broken. If you combine the tests, you'll be debugging a unit test failure.

By the way, are you writing tests first? With TDD you're unlikely to end up with bloated tests.

Upvotes: 8
