Reputation: 17981
Is it normal to have tests that are way bigger than the actual code being tested? For every line of code I am testing I usually have 2-3 lines in the unit test. Which ultimately leads to tons of time being spent just typing the tests in (mock, mock and mock more).
Where are the time savings? Do you ever avoid tests for code that is along the lines of being trivial? Most of my methods are less than 10 lines long and testing each one of them takes a lot of time, to the point where, as you see, I start questioning writing most of the tests in the first place.
I am not advocating not unit testing, I like it. Just want to see what factors people consider before writing tests. They come at a cost (in terms of time, hence money), so this cost must be evaluated somehow. How do you estimate the savings created by your unit tests, if ever?
Upvotes: 36
Views: 3983
Reputation: 19783
Tests that are 2-3 times bigger than the code under test are NOT normal.
Use helper classes/methods in tests.
Limit scope of tests.
Use test fixtures effectively.
Use test teardown effectively.
Use unit test frameworks effectively.
And you won't have such tests anymore.
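The advice above can be sketched in Python's `unittest` (the `Account` class and its behaviour are hypothetical, chosen purely for illustration): a shared `setUp` fixture removes the construction code that every test would otherwise repeat.

```python
import unittest

class Account:
    """Hypothetical class under test."""
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

class AccountTest(unittest.TestCase):
    def setUp(self):
        # Shared fixture: every test starts from the same known state,
        # so no test body has to repeat this setup.
        self.account = Account(balance=100)

    def test_deposit_increases_balance(self):
        self.account.deposit(50)
        self.assertEqual(self.account.balance, 150)

    def test_deposit_zero_is_noop(self):
        self.account.deposit(0)
        self.assertEqual(self.account.balance, 100)
```

With the fixture in place each test is two lines; without it, every test would carry its own setup code on top of that.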
Upvotes: 1
Reputation: 30790
You might be testing the wrong thing - you should not have different tests for every method in your code.
You might have too many tests because you test implementation and not functionality - instead of testing how things are done, test what is done.
For example if you have a customer that is entitled to get a discount on every order - create a customer with the correct data and create an order for that customer and then make sure that the final price is correct. That way you actually test the business logic and not how it's done internally.
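That discount example might look like this as a test (the `Customer` and `Order` classes and the 25% discount are hypothetical, used only to make the point): the test checks the observable business rule, not the internal mechanics.

```python
from dataclasses import dataclass

# Hypothetical domain objects for illustration only.
@dataclass
class Customer:
    name: str
    discount: float  # e.g. 0.25 for a 25% discount

@dataclass
class Order:
    customer: Customer
    subtotal: float

    def final_price(self):
        return self.subtotal * (1 - self.customer.discount)

def test_discounted_customer_pays_reduced_price():
    # Assert on the business outcome: a customer entitled to 25% off
    # pays 75 on a 100 order. How the discount is applied internally
    # is irrelevant to this test.
    vip = Customer(name="Alice", discount=0.25)
    order = Order(customer=vip, subtotal=100.0)
    assert order.final_price() == 75.0

test_discounted_customer_pays_reduced_price()
```

If the pricing internals are later refactored, this test keeps passing as long as the business rule still holds, which is exactly the point.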
Another reason for big tests is lack of isolation (a.k.a. mocking) - if you need to initialize difficult objects that require a lot of code, try using fakes/mocks instead.
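As a sketch of that isolation, here is a hypothetical function whose real dependency (imagine a price feed that needs a network connection and plenty of configuration) is replaced by a one-line `unittest.mock.Mock`:

```python
from unittest.mock import Mock

# Hypothetical function under test: the real price_feed object would
# need a network connection and lots of setup code to construct.
def total_in_usd(price_feed, amounts_eur):
    rate = price_feed.get_rate("EUR", "USD")
    return sum(amounts_eur) * rate

def test_total_uses_current_rate():
    # One line of mock setup replaces the difficult real object.
    feed = Mock()
    feed.get_rate.return_value = 1.5
    assert total_in_usd(feed, [10, 20]) == 45.0
    # We can also verify the collaboration itself.
    feed.get_rate.assert_called_once_with("EUR", "USD")

test_total_uses_current_rate()
```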
And finally, complicated tests can be a smell - if you need to write a lot of code to test a simple piece of functionality, it might mean that your code is tightly coupled and your APIs are not clear enough.
Upvotes: 19
Reputation: 2841
In my practice of TDD, I tend to see larger tests (in LOC) for the classes that are closer to the integration points of a system, e.g. database access classes, web service classes, and authentication classes.
The interesting point about these unit tests is that even after I write them I still feel uneasy about whether those classes work, which leads me to write integration tests against the real database, web service, or authentication service. It is only after automated integration tests have been established that I feel comfortable moving on.
The integration tests are normally much shorter than their respective unit tests and do more for me and the other developers on the team to prove that this part of the system works.
-HOWEVER-
Automated integration tests come with their own nasties that include handling the larger runtime of the tests, setting up and tearing down the external resources and providing test data.
At the end of the day, I have always felt good about including automated integration tests, but have almost always felt that the unit tests for these "integration" classes were a lot of work for not much payoff.
Upvotes: 1
Reputation: 1196
Too much test code could mean that the actual code being tested was not designed for testability. There's a great guide on testability from Google developers that tries to address this issue.
Badly designed code means tons of test code that has only one reason: making the actual code testable. With a good design the tests can be focused more on what's important.
Upvotes: 3
Reputation: 279255
Yes, this is normal. It's not a problem that your test code is longer than your production code.
Maybe your test code could be shorter than it is, and maybe not, but in any case you don't want test code to be "clever", and I would argue that after writing it the first time, you don't want to refactor test code to factor out commonality unless absolutely necessary. For instance, if you have a regression test for a past bug, then unless you change the public interface under test, don't touch that test code. Ever. If you do, you'll only have to pull out some ancient version of the implementation, from before the bug was fixed, to prove that the new regression test still does its job. Waste of time. If the only time you ever modify your test code is to make it "easier to maintain", you're just creating busy-work.
It's usually better to add new tests than to replace old tests with new ones, even if you end up with duplicated tests; replacing risks a mistake for no benefit. The exception is if your tests are taking too long to run - then you want to avoid duplication, but even that might be best done by splitting your tests into "core tests" and "full tests", and running all the old maybe-duplicates less frequently.
Also see SQLite's test code to production code ratio
Upvotes: 1
Reputation: 95432
This is true more often than not. The key to finding out if it's a good or bad thing is to find out the reason why the tests are bigger.
Sometimes they're bigger simply because there are a lot of test cases to cover, or the spec is complex, but the code to implement the spec is not that lengthy.
Also, consider the time it takes to eliminate bugs. If unit tests prevented certain bugs from happening, ones that would've taken a lot more time to debug and fix, would you argue that TDD made your development longer?
Upvotes: 2
Reputation: 6608
Very valid and good question. I follow a few simple principles when needed.
All of this takes considerable time, but as long as we remember that the output should be good and bug free, things go fine.
Upvotes: 1
Reputation: 72755
One of the things that guides me when I write tests or do TDD (which, incidentally, I learnt from an answer to one of my questions on SO) is that you don't have to be as careful about the design/architecture of your tests as you do about your actual code. The tests can be a little dirty and suboptimal (code-design wise) as long as they do their job right. Like all pieces of advice on design, it's to be applied judiciously, and there's no substitute for experience.
Upvotes: 0
Reputation: 233150
Unit test code should follow the same best practices as production code. If you have that much unit test code it smells of a violation of the DRY principle.
Refactoring your unit tests to use Test Utility Methods should help reduce the overall unit test footprint.
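A Test Utility Method might look like the following sketch (the `User` class and the defaults in `make_user` are hypothetical): each test overrides only the one field it cares about, instead of repeating the full construction everywhere.

```python
# Hypothetical class under test.
class User:
    def __init__(self, name, age, active):
        self.name = name
        self.age = age
        self.active = active

    def can_vote(self):
        return self.active and self.age >= 18

def make_user(name="any", age=30, active=True):
    """Test Utility Method: sensible defaults, overridden per test."""
    return User(name=name, age=age, active=active)

def test_minor_cannot_vote():
    assert not make_user(age=16).can_vote()

def test_inactive_adult_cannot_vote():
    assert not make_user(active=False).can_vote()

def test_active_adult_can_vote():
    assert make_user().can_vote()

test_minor_cannot_vote()
test_inactive_adult_cannot_vote()
test_active_adult_can_vote()
```

When the `User` constructor later grows a new parameter, only `make_user` changes, not every test - which is exactly the DRY payoff the answer describes.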
Upvotes: 6
Reputation: 9474
Testing should be about finding the right balance, which depends on many different factors.
I typically only write tests for the "public API", and thereby only implicitly test any assembly-internal classes used to deliver the public functionality. But as your need for reliability and reproducibility increases, you should add more tests.
Upvotes: 1
Reputation: 18430
Well yes, it can well happen that the tests have more lines of code than the actual code you are testing, but it is totally worth it when you consider the time you save when debugging.
Instead of having to test the whole application/library by hand every time you make a change, you can rely on your test suite, and if it fails, you have more accurate information on where it broke than "it does not work".
About avoiding tests: If you don't test certain parts of your code you are actually undermining the whole concept and purpose of tests and then the tests are in fact rather useless.
You do not, however, test code you did not write. That is, you assume that external libraries work properly, and generated getter/setter methods (if your language supports those) do not have to be tested either. It is very safe to assume that assigning a value to a variable won't fail.
Upvotes: 0
Reputation: 35542
Well,
This is a trade-off scenario where more tests ensure stability. Stability not only means that the code under test is more error free and foolproof; it also gives assurance that the program will not break in future. However crazy the arguments you pass to a method, the code block will return properly (of course with appropriate error messages wherever required).
Even more, you can write your unit test cases before even knowing the internal operation of the method under test. This is like a black-box scenario where you first finish writing your test cases and then start coding. The big advantage is that development becomes error free in fewer iterations, by running the test cases alongside development.
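That black-box, test-first workflow can be sketched as follows (the `slugify` function and its contract are hypothetical, invented only to illustrate the order of work): the test pins down the behaviour, including edge cases, before any implementation exists.

```python
# Step 1: the test is written first, fixing the contract up front,
# with no knowledge of how slugify will work internally.
def test_slugify_contract():
    assert slugify("Hello World") == "hello-world"
    assert slugify("") == ""                       # edge case decided up front
    assert slugify("  Many   Spaces ") == "many-spaces"

# Step 2: only now is the implementation written, to make the test pass.
def slugify(text):
    return "-".join(text.lower().split())

test_slugify_contract()
```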
And the size of the test code does not matter at all. All that matters is the comprehensiveness and coverage of your unit tests - whether a test exists only for name's sake, or is a serious test case that handles all the possible cases.
Upvotes: 1