Bastl

Reputation: 2996

integration tests vs test coverage

My company has a rule requiring 75% test coverage, achieved through either unit or integration tests. Due to the complexity of the overall system, developers tend to write integration tests (e.g. using Selenium WebDriver against the running application) instead of taking on the burden of mocking out dependent services/classes.

I am not a fan of this approach, and I wonder what the coverage data from integration tests actually means:

In my opinion, a test should define expected behavior and verify it. If an integration test goes deep into the application, through many services and possibly down to the DB layer and back, it covers a lot of lines, but it is not at all clear what the expected behavior of any given covered line is. The quality of such coverage data is therefore questionable IMHO and, even worse, it increases maintenance effort.
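A minimal sketch of what I mean (the `Discount` class is hypothetical, using JUnit): a unit test states the expected behavior of a covered line explicitly, while an integration test that merely passes through that line does not.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class Discount {
    // A line that an integration test might "cover" merely in passing:
    double apply(double price) {
        return price * 0.9;
    }
}

class DiscountTest {
    // The unit test makes the expected behavior of that line explicit:
    // a 10% discount, nothing else.
    @Test
    void appliesTenPercentDiscount() {
        assertEquals(90.0, new Discount().apply(100.0), 1e-9);
    }
}
```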

Is that point of view correct? How can I back it up when it comes to a discussion with management?

Upvotes: 2

Views: 3877

Answers (2)

user1747116

Reputation:

What will happen if you can't reach 75% test coverage before the deadline? And even if you do reach it, what about the remaining 25%? Your company isn't concentrating on shipping a product with fewer obvious defects; it's concentrating on a coverage percentage. Explain your approach in a credible way, regardless of whether you use Selenium or any other tool.

Upvotes: 1

Jarrett Meyer

Reputation: 19573

Blind percentage rules for test coverage aren't really ideal, and integration tests are a perfect example of why. If something breaks, can you tell exactly what broke? What if a single request hits multiple integration points (e.g. a database, a web service, and an outgoing email)? There's too much going on for the test to really have meaning.

Can you have 100% coverage without testing all possible outcomes? Can you write 10,000 tests and still fail to cover the most important classes in a piece of software? Blind metrics are bad and have no real bearing on quality.
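As a sketch (with a hypothetical `PriceCalculator`): the following test executes every line and therefore yields 100% line coverage, yet it asserts nothing, so it proves nothing.

```java
import org.junit.jupiter.api.Test;

class PriceCalculator {
    double priceWithTax(double net) {
        return net * 1.19;
    }
}

class CoverageWithoutMeaningTest {
    // Executes every line of PriceCalculator, so coverage tools report
    // 100% -- yet there is no assertion, so any bug would still "pass".
    @Test
    void executesButVerifiesNothing() {
        new PriceCalculator().priceWithTax(100.0);
    }
}
```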

Typically, it is much more valuable to have smaller, more discrete tests. We get there by stubbing out integration points and mocking them away in tests. Then you can set up the scenario: "If the database does this, then my code does that." This kind of problem is much easier to set up in memory than by getting a real database to do that exact thing on a repeatable basis. How would you automate an integration test for a server disconnect? That's very hard. Stubbing the database call to throw that specific SQLException is much easier and repeatable.
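A minimal sketch of that last scenario (the `OrderRepository` and `OrderService` names are hypothetical; the stubbing uses Mockito):

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.mock;

import java.sql.SQLException;

import org.junit.jupiter.api.Test;

// Hypothetical integration point: a repository backed by the database.
interface OrderRepository {
    void save(String order) throws SQLException;
}

// Hypothetical code under test: it must degrade gracefully on DB failure.
class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    boolean placeOrder(String order) {
        try {
            repository.save(order);
            return true;
        } catch (SQLException e) {
            return false; // expected behavior on a lost connection
        }
    }
}

class OrderServiceTest {
    @Test
    void reportsFailureWhenDatabaseConnectionDrops() throws SQLException {
        OrderRepository db = mock(OrderRepository.class);
        // Simulate the server disconnect in memory -- no real database needed.
        doThrow(new SQLException("connection reset")).when(db).save("order-1");

        assertFalse(new OrderService(db).placeOrder("order-1"));
    }
}
```

The "server disconnect" becomes a one-line stub that runs identically on every build, which is exactly the repeatability a real database cannot easily provide.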

Upvotes: 4
