GridDragon

Reputation: 3135

How to handle unreachable Exceptions with TDD

I'm still getting my head around parts of TDD. I've got a new library I'm writing so this seems like a good opportunity to try it out.

What I've read on TDD advertises 100% code coverage, but that seems a bit ivory tower, so I configured JaCoCo to require 90% code coverage to give me some breathing room.

I started out on the code that loads the KeyStore. There's lots of boilerplate code and lots of checked exceptions, so starting here makes my life easier down the road. Everything looks good, and my tests are passing. But code coverage is only at 49%. Looking through the code, everything is covered except for what I'd call "Impossible Exceptions" like this one:

public void saveKey(Key key, String alias) {
    KeyStore.SecretKeyEntry entry = new KeyStore.SecretKeyEntry(new SecretKeySpec(key.getMaterial(), "AES"));

    try {
        keyStore.setEntry(alias, entry, new KeyStore.PasswordProtection(password));
    } catch (KeyStoreException e) {
        throw new UnexpectedErrorException("Failed to save the key", e);
    }
}

In this particular case, according to the docs, KeyStoreException is thrown if the keyStore hasn't been initialized. I'm coding defensively and have guaranteed that the keyStore will be initialized at this point, so the KeyStoreException cannot be thrown. But it's a checked exception, so I still have to deal with it; I'm wrapping it in a custom RuntimeException.
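
To illustrate the guarantee, the defensive setup looks roughly like this (a sketch; the KeyManager class name is a stand-in for my actual class):

public KeyManager(char[] password) throws GeneralSecurityException, IOException {
    this.password = password;
    this.keyStore = KeyStore.getInstance("PKCS12");
    // Loading the store (even with a null stream, which creates an empty
    // store) initializes it, so the "not initialized" case of
    // KeyStoreException can no longer occur in saveKey.
    this.keyStore.load(null, password);
}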

Problem is I have no way to trigger this error in my unit tests. In fact I've done everything I can to make sure that it won't happen.

Given cases like this, how does TDD achieve a mythical 100% coverage?

I could mock KeyStore, but the advice from Mockito is "Don't mock types you don't own," so I'd rather not. Also, KeyStore relies on a couple of static methods, where Mockito doesn't help, and I don't want to bring in PowerMock for a simple case; I'm not convinced that throwing more libraries at the problem is an ideal solution.

So:

My anticipated solution for now is to move my configured 90% code coverage restriction down to 40 or 50 percent until I have more classes to bring my overall average coverage up. But before I do that, is there something I'm missing?

Upvotes: 3

Views: 588

Answers (3)

Fabio

Reputation: 32445

Actually, TDD has nothing to do with code coverage.
One of the rules of TDD is:

You are not allowed to write any production code unless it is to make a failing unit test pass.

So if you practice Test-Driven Development and follow the rule above, you will always have 100% code coverage.

Code coverage is a measuring tool for unit testing, for the case when you write tests after you have written the production code. With code coverage you "check" that you didn't forget to write a test for some case in your logic.

Upvotes: 0

Timothy Truckle

Reputation: 15622

What I've read on TDD advertises 100% code coverage.

This is a common misconception.

When we do TDD we are not aiming for 100% code coverage. We are aiming for 100% requirement coverage. And no: 100% code coverage does not imply 100% requirement coverage and vice versa...

Unfortunately we cannot measure requirement coverage, so the only way to get it is to do TDD consistently.


In this particular case, according to the docs, KeyStoreException is thrown if the keyStore hasn't been initialized. I'm coding defensively and have guaranteed that the keyStore will be initialized at this point. So the KeyStoreException cannot be thrown.

Given cases like this how does TDD achieve a mythical 100% coverage?

I could mock KeyStore, but advice from Mockito is "Don't mock types you don't own."

Unit tests verify your unit's behavior in isolation, which means that any other units your code under test collaborates with need to be replaced by test doubles. There is a discussion out there about the level of detail at which we should cut our units, but "types you don't own" are definitely not part of your code. Not mocking them means that your test depends on this external code, and if the test fails you don't know whether your code has a problem or the external code does.
Therefore "Don't mock types you don't own." is rather bad advice.

Upvotes: 0

VoiceOfUnreason

Reputation: 57239

Like most aspects of programming, testing requires thoughtfulness. TDD is a very useful, but certainly not sufficient, tool to help you get good tests. If you are testing thoughtfully and well, I would expect a coverage percentage in the upper 80s or 90s. I would be suspicious of anything like 100% - it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing. -- Martin Fowler, 2012

For the core of your program, 100% coverage might be an achievable target; after all, you never introduce code without a failing test that requires it, and "refactoring" shouldn't be introducing unused branches.

But code that interacts with the boundary... 100% coverage becomes a lot more expensive, and you eventually reach the tipping point:

I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence... -- Kent Beck, 2008

If KeyStoreException were unchecked, you wouldn't have used this particular example; the problems of checked exceptions are somewhat unique to Java.

David Parnas, writing in 1972, gave us some hints about how we might manage this problem. You can design your solution so that it hides your decision to use a java.security.KeyStore. In other words, you create an interface that describes the API that you wish a keystore would have, and write all of your code to that interface. Only the implementation needs to know the gritty details of the exception management; only your implementation needs to know that you decided that KeyStore exceptions are unrecoverable.
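
A minimal sketch of that seam, reusing the Key and UnexpectedErrorException types from the question (the interface and class names here are made up):

import java.security.KeyStore;
import java.security.KeyStoreException;
import javax.crypto.spec.SecretKeySpec;

// The keystore API we wish we had: no checked exceptions in the contract.
public interface SecretKeyStore {
    void saveKey(Key key, String alias);
}

// The one thin adapter that knows about java.security.KeyStore.
class JcaSecretKeyStore implements SecretKeyStore {
    private final KeyStore keyStore;
    private final char[] password;

    JcaSecretKeyStore(KeyStore keyStore, char[] password) {
        this.keyStore = keyStore;
        this.password = password;
    }

    @Override
    public void saveKey(Key key, String alias) {
        KeyStore.SecretKeyEntry entry =
                new KeyStore.SecretKeyEntry(new SecretKeySpec(key.getMaterial(), "AES"));
        try {
            keyStore.setEntry(alias, entry, new KeyStore.PasswordProtection(password));
        } catch (KeyStoreException e) {
            // Only this adapter knows that the exception is treated as unrecoverable.
            throw new UnexpectedErrorException("Failed to save the key", e);
        }
    }
}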

Another way of thinking about the same idea: we're trying to partition the code into two piles. The core contains complicated code that is easy to test; the boundary contains simple code that is difficult to test. Your touchstone for boundary code is Hoare: "so simple that there are obviously no deficiencies".
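
With that split, the core can be exercised against a trivial in-memory fake of the (hypothetical) SecretKeyStore interface sketched above, with no mocking library and no java.security in sight:

import java.util.HashMap;
import java.util.Map;

// Hand-rolled test double for testing the "core" pile.
class InMemorySecretKeyStore implements SecretKeyStore {
    private final Map<String, Key> keys = new HashMap<>();

    @Override
    public void saveKey(Key key, String alias) {
        keys.put(alias, key);
    }

    // Test-only accessor for assertions.
    Key savedKey(String alias) {
        return keys.get(alias);
    }
}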

Use test coverage heuristics that are appropriate for each case.

My anticipated solution for now is to move my configured 90% code coverage restriction down to 40 or 50 percent until I have more classes to bring my overall average coverage up.

Using a ratchet to prevent a regression in your coverage statistic is a good idea.

Upvotes: 4
