Reputation: 2128
Our company is trying to enforce test-driven development, and as a development manager, I'm trying to define what that acceptance criteria really means. We mostly follow an agile methodology, and each story going to test needs some level of assurance (entrance criteria) of unit test coverage. I'm interested to hear how you enforce this (if you do) as an effective gate within your companies.
Upvotes: 1
Views: 88
Reputation:
For our Ruby on Rails app, we use a code coverage gem called SimpleCov. I am not sure what language your company uses, but I am sure there is a coverage tool for it. SimpleCov is great for Ruby because it produces a detailed HTML report, highlighting down to the line whether code was covered, skipped (filtered out), or missed.
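For context, our setup is roughly the following; this is a minimal sketch assuming RSpec and the built-in Rails profile, so file paths may differ in your project:

```ruby
# Gemfile
group :test do
  gem 'simplecov', require: false
end
```

```ruby
# spec/spec_helper.rb (or test/test_helper.rb) -- this must be loaded
# before any application code so that coverage is tracked from the start.
require 'simplecov'
SimpleCov.start 'rails'
```

After a test run, SimpleCov writes the HTML report to coverage/index.html with the per-line highlighting mentioned above.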
We have only been tracking our code coverage for two months. We started at 30% and are now near 60%. Depending on the age of your company's application, you may want to set your coverage expectations at 80% or higher. According to SimpleCov, anything 91% or higher is "in the green" and anything below 80% is "in the red" (handy color analogies).
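If you want to turn that expectation into an actual gate, SimpleCov can fail the test run when coverage drops below a threshold. A rough sketch (the 80% figure is just an example, not a recommendation):

```ruby
# spec/spec_helper.rb
require 'simplecov'

SimpleCov.start 'rails' do
  # Exit with a non-zero status after the suite runs if overall coverage
  # falls below 80%, which in turn fails the CI build.
  minimum_coverage 80
end
```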
I feel the most important thing is to make sure your crucial features are tested -- such features often have the most lines of code, so covering them first will drastically increase coverage.
Another thing to note: with a library like SimpleCov you can skip (filter out) lines of code, or even entire files, that you consider legacy and that would otherwise lower your coverage. That is the other reason our coverage almost doubled in two months.
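As a sketch of how that filtering looks (the directory names here are hypothetical):

```ruby
# spec/spec_helper.rb
require 'simplecov'

SimpleCov.start 'rails' do
  # Exclude whole directories of legacy or generated code from the metric.
  add_filter '/vendor/'
  add_filter '/lib/legacy/'
end
```

Individual blocks of code can also be excluded by wrapping them between `# :nocov:` comment markers.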
Again, we are new to measuring code coverage, but strongly believe in its benefit to our current testing suite and application development.
Upvotes: 0
Reputation: 20980
What you don't want is to set any code coverage requirements. Any requirement like that can and will be gamed.
Instead, I'd look at measuring RTF: Running, Tested Features. See http://xprogramming.com/articles/jatrtsmetric/
Upvotes: 1