Reputation: 1
We've just inherited a new codebase which has a bunch of automated tests. We've been 'going with the flow' and running them before each merge and deployment. Some of our staff have even been adding to the collection of tests when they write new functionality.
I've been trying my best to convince the team that we need to drop these things like a hot potato, but there is a lot of internal bickering about it.
As I see it, the problem is as follows:

- These tests take a very long time to run (up to 10 minutes on one developer's older machine!).
- They keep breaking - I don't know about you, but I hate it when my software breaks. If we just got rid of all this buggy code, we would reduce the number of bugs by a factor of 10, at the least.
- The developers who have been writing new ones are taking at least twice as long to do anything as the others. This is ridiculous; we are already working with tight deadlines, and now progress is slowing down even further, for this project at least. I was looking at a commit made by one of these devs: the feature was just over 100 LOC, and the tests were close to 250 LOC!
My question is: how can we strip out all of this buggy, unmaintainable code that breaks every five minutes (I don't just want to do a search and replace, in case actual features start breaking as a result), and how do we convince our developers to stop wasting time?
Upvotes: 0
Views: 439
Reputation: 42617
Why do the tests keep breaking? Why are they buggy and unmaintainable? That is the root of your problems.
The tests should express the requirements of your software. If they keep breaking, that may be because they are too tightly coupled to the internals of the implementation, rather than only invoking the code through public interfaces. If you can't drive the code through public interfaces, that is usually a sign that the code could be better designed to be more testable - which tends to also make it clearer, more flexible, and more robust.
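For illustration, here is a minimal sketch of what testing through a public interface looks like (JUnit 5 and a hypothetical ShoppingCart class - the names are made up, not your code):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical example: the test drives the class only through its
// public interface, so refactoring the internals cannot break it.
class ShoppingCartTest {

    @Test
    void totalIsTheSumOfItemPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("Widget", 1000); // prices in cents
        cart.addItem("Gadget", 1500);

        // Assert on observable behaviour, not on private fields or
        // whatever data structure holds the items internally.
        assertEquals(2500, cart.totalInCents());
    }
}
```

Because the test only calls public methods, the team can rewrite the internals however they like, and the test keeps passing as long as the behaviour is preserved.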
If you remove the tests, then you will have no idea whether your codebase still works. Stripping out tests because they are "buggy" leaves you with the uncomfortable question: if the tests are buggy, why would the actual code be any less buggy?
On the other hand, if the (legacy) tests are cripplingly slow and awkward, then at least some of them may be more trouble than they are worth - but you'd have to consider them on a case-by-case basis and work out how you will replace them. You could time the tests and see if there are any particularly slow ones to target - if they are JUnit tests, then you get the timings automatically. You could also measure code coverage and see whether tests overlap, or cover very small amounts of the codebase given their size and run time. There are some books specifically on the topic of dealing with legacy code.
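To make the slow ones visible as they run, a small JUnit 5 extension will do it. This is a sketch, and the 200 ms threshold is an arbitrary assumption to tune for your suite:

```java
import org.junit.jupiter.api.extension.AfterTestExecutionCallback;
import org.junit.jupiter.api.extension.BeforeTestExecutionCallback;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.ExtensionContext.Namespace;

// Sketch: times each test and prints a warning for slow ones.
// Register it on a test class with @ExtendWith(SlowTestReporter.class).
public class SlowTestReporter
        implements BeforeTestExecutionCallback, AfterTestExecutionCallback {

    private static final long THRESHOLD_MS = 200; // arbitrary assumption
    private static final Namespace NS = Namespace.create(SlowTestReporter.class);

    @Override
    public void beforeTestExecution(ExtensionContext context) {
        context.getStore(NS).put("start", System.nanoTime());
    }

    @Override
    public void afterTestExecution(ExtensionContext context) {
        long start = context.getStore(NS).get("start", Long.class);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        if (elapsedMs > THRESHOLD_MS) {
            System.out.printf("SLOW TEST (%d ms): %s%n",
                    elapsedMs, context.getDisplayName());
        }
    }
}
```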
When done properly, automated tests speed up development rather than slowing it down (assuming you want the software to actually work): developers can make changes and refactor the code without fear of breaking something else every time they touch anything, and they can clearly tell when a feature is working, as opposed to "it seems to work, I reckon".
Speeding up the tests would be desirable. Not all tests can be fast if they are exercising the system end-to-end, but I normally keep such "feature tests" separate from the fine-grained unit tests, which should run very fast so they can be run all the time. See this posting by Michael Feathers on his definition of a unit test, and how to deal with slow tests.
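With JUnit 5, for example, tagging is one way to keep the two kinds of test separate (a sketch - the tag name and the class are made up):

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Hypothetical end-to-end test, tagged so the build can exclude it
// from the fast suite that developers run on every change.
@Tag("slow")
class CheckoutFeatureTest {

    @Test
    void fullCheckoutFlowPlacesAnOrder() {
        // ...start the application and drive it end-to-end here...
    }
}
```

A Gradle build can then run the fast suite with useJUnitPlatform { excludeTags 'slow' } and leave the tagged tests to a separate CI job.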
If you care whether your code works or not, then the test code is just as important as the actual code - it needs to be correct, clear and factored to avoid duplication. Having at least as much test code as real code is perfectly normal, though the ratio will vary a lot with the type of code. Whether your specific example was excessive is impossible to say without seeing it.
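As an aside on the test-to-code ratio: parameterized tests are one way to factor duplication out of test code. A sketch, assuming JUnit 5 and a hypothetical Discount class with made-up discount rules:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// Sketch: one parameterized test covers several cases that would
// otherwise be near-duplicate test methods.
class DiscountTest {

    @ParameterizedTest
    @CsvSource({
        "1,   0",   // a single item earns no discount
        "10,  5",   // ten items earn 5%
        "100, 15"   // bulk orders earn 15%
    })
    void discountGrowsWithQuantity(int quantity, int expectedPercent) {
        assertEquals(expectedPercent, Discount.percentFor(quantity));
    }
}
```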
Update: In one of your other questions you say "I've introduced the latest spiffy framework, unit & functional testing, CI, a bug tracker and agile workflow to the environment". Have you changed your mind about testing, or is this question a straw-man, perhaps? Or is your question really "should I drop all the old tests and start again"? (It doesn't read that way, given your comments about developers writing new tests...)
Upvotes: 3
Reputation: 2332
Are the tests part of a testing framework, or are they home-grown bits of testing logic that have attached themselves to your production logic?
If home-grown, you might have some luck convincing the team to refactor the tests into a structured framework. There are plenty out there now, which may not have been the case when this code was first set up.
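For example, a home-grown check buried in a main method usually translates almost line-for-line into a framework test. A sketch using JUnit 5 and a hypothetical Parser class:

```java
// Before (hypothetical home-grown check):
//   if (!new Parser().parse("1 + 2").equals("3")) {
//       System.out.println("FAIL: parser");
//   }
//
// After: the same check as a test that a runner can discover,
// report on, and use to fail the build.
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class ParserTest {

    @Test
    void addsTwoNumbers() {
        assertEquals("3", new Parser().parse("1 + 2"));
    }
}
```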
As you know, the earlier you catch and repair a bug, the better. Automated tests are one way people catch errors while they're still easy to fix. It might cost you extra development time to prepare and maintain the tests, but balance that against the cost of a major flaw that slips past everyone except the final customer (especially when deadlines are involved).
Upvotes: 0