chills42

Reputation: 14513

Testing a test?

I primarily spend my time working on automated tests of win32 and .NET applications, which take about 30% of our time to write and 70% to maintain. We have been looking into methods of reducing the maintenance times, and have already moved to a reusable test library that covers most of the key components of our software. In addition, we have some work in progress to get our library to a state where we can use keyword-based testing.

I have been considering unit testing our test library, but I'm wondering if it would be worth the time. I'm a strong proponent of unit testing of software, but I'm not sure how to handle test code.

Do you think automated GUI testing libraries should be unit tested? Or is it just a waste of time?

Upvotes: 7

Views: 1306

Answers (13)

MadSeb

Reputation: 8234

You may want to explore a mutation testing framework (if you work with Java, check out PIT Mutation Testing). Another way to assess the quality of your unit tests is to look at the reports provided by tools such as SonarQube; the reports include various coverage metrics.

Upvotes: 0

Oscar Mullin

Reputation: 21

I would suggest that testing the tests is a good idea and something that should be done. Just make sure that whatever you build to test your app is not more complex than the app itself. As was said before, TDD is a good approach even when building automated functional tests (I personally wouldn't do it that way, but it is a good approach anyway). Unit testing your test code is a good approach as well.

IMHO, if you're automating GUI testing, just go ahead with whatever manual tests are available (you should have steps, raw scenarios, expected results and so on) and make sure they pass. Then, for other tests that you might create and that are not already manually scripted, unit test them and follow a TDD approach (and if you have time, you could unit test the others as well). Finally, keyword-driven testing is, IMO, the best approach you could follow because it gives you the most flexibility.
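To make the keyword-driven idea concrete, here is a minimal sketch (in Java, purely for illustration) of the dispatch layer such frameworks are typically built around. The KeywordAction interface, the step-table shape and the keyword lookup are hypothetical, not taken from any particular tool:

import java.util.List;
import java.util.Map;

// A keyword maps to an action object; a test case is just a list of
// (keyword, arguments...) rows, so new cases can be authored without code.
interface KeywordAction {
    void execute(List<String> args);
}

class KeywordRunner {
    private final Map<String, KeywordAction> keywords;

    KeywordRunner(Map<String, KeywordAction> keywords) {
        this.keywords = keywords;
    }

    void run(List<List<String>> steps) {
        for (List<String> step : steps) {
            KeywordAction action = keywords.get(step.get(0));
            if (action == null) {
                throw new IllegalArgumentException("Unknown keyword: " + step.get(0));
            }
            action.execute(step.subList(1, step.size()));
        }
    }
}

The runner itself is exactly the kind of small, reusable piece that is worth covering with its own unit tests (for example, that an unknown keyword is reported rather than silently skipped).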

Upvotes: 0

Arun

Reputation: 11

From your question, I understand that you are building a keyword-driven framework for automation testing. In this case, it is always recommended to do some white-box testing on the common and GUI utility functions. Since you are interested in unit testing each GUI testing function in your libraries, please go for it. Testing is always good. It is not a waste of time; I would see it as a 'value-add' to your framework.

You also mentioned handling test code. If you mean the test approach, group the functions/modules that perform similar work, e.g. GUI element validation (presence), GUI element input, and GUI element reads. Group them per element type and take a unit-test approach for each group; it will be easier for you to track the testing. Cheers!
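As a rough illustration of that grouping, here is a hedged sketch in Java (JUnit 5). The FakeTextBox control and the utility functions are made-up stand-ins for whatever your library actually wraps:

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;

// Unit tests for GUI utility functions, grouped by the kind of work they do.
class GuiUtilityLibraryTest {

    // Stand-in for a real win32/.NET control, for illustration only.
    static class FakeTextBox {
        String text = "";
        boolean visible = true;
    }

    // Hypothetical utility functions under test.
    static boolean isPresent(FakeTextBox box)          { return box != null && box.visible; }
    static void typeText(FakeTextBox box, String text) { box.text = text; }
    static String readText(FakeTextBox box)            { return box.text; }

    @Nested
    class ElementValidation {
        @Test
        void missingControlIsReportedAsAbsent() {
            assertFalse(isPresent(null));
        }
    }

    @Nested
    class ElementInput {
        @Test
        void typedTextEndsUpInTheControl() {
            FakeTextBox box = new FakeTextBox();
            typeText(box, "hello");
            assertEquals("hello", box.text);
        }
    }

    @Nested
    class ElementRead {
        @Test
        void readReturnsTheControlsCurrentText() {
            FakeTextBox box = new FakeTextBox();
            box.text = "42";
            assertEquals("42", readText(box));
        }
    }
}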

Upvotes: 1

Lieven Keersmaekers

Reputation: 58431

You might want to take a look at Who tests the tests.

The short answer is that the code tests the tests, and the tests test the code.

Huh?

Testing Atomic Clocks
Let me start with an analogy. Suppose you are travelling with an atomic clock. How would you know that the clock is calibrated correctly?

One way is to ask your neighbor with an atomic clock (because everyone carries one around) and compare the two. If they both report the same time, then you have a high degree of confidence they are both correct.

If they are different, then you know one or the other is wrong.

So in this situation, if the only question you are asking is, "Is my clock giving the correct time?", then do you really need a third clock to test the second clock and a fourth clock to test the third? Not at all. Stack Overflow avoided!

IMO, it's a tradeoff between how much time you have and how much quality you'd like to have.

  • If I were using a home-made test harness, I'd test it if time permitted.
  • If it's a third party tool I'm using, I'd expect the supplier to have tested it.

Upvotes: 5

Disillusioned

Reputation: 14832

Answer

Yes, your GUI testing libraries should be tested.

For example, if your library provides a Check method to verify the contents of a grid against a 2-dimensional array, you want to be sure that it works as intended.

Otherwise, your more complex test cases that test business processes in which a grid must receive particular data may be unreliable. If an error in your Check method produces false negatives, you'll quickly find the problem. However, if it produces false positives, you're in for major headaches down the line.

To test your CheckGrid method:

  • For the first case, populate a grid with known values and call the CheckGrid method with those same values. If this case passes, at least one aspect of CheckGrid works.
  • For the second case, call CheckGrid with values that don't match the grid; you're expecting the CheckGrid method to report a test failure.
  • The particulars of how you indicate that expectation will depend on your xUnit framework (see an example later). But basically, if the test failure is not reported by CheckGrid, then the test case itself must fail.
  • Finally, you may want a few more test cases for special conditions, such as empty grids or a grid size that doesn't match the array size.

You should be able to modify the following DUnit example for most frameworks in order to test that CheckGrid correctly detects errors:

begin
  //Populate TheGrid with known values first
  try
    //Deliberately compare against values that do NOT match the grid's content
    CheckGrid(<incorrect values>, TheGrid);
    LFlagTestFailure := False;
  except
    //CheckGrid is expected to signal the mismatch by raising ETestFailure
    on E: ETestFailure do
      LFlagTestFailure := True;
  end;
  Check(LFlagTestFailure, 'CheckGrid method did not detect errors in grid content');
end;

Let me reiterate: your GUI testing libraries should be tested; and the trick is - how do you do so effectively?

The TDD process recommends that you first figure out how you intend to test a new piece of functionality before you actually implement it. The reason is that if you don't, you often find yourself scratching your head as to how you're going to verify that it works. It is extremely difficult to retrofit test cases onto existing implementations.

Side Note

One thing you said bothers me a little... you said it takes "70% (of your time) to maintain (your tests)"

This sounds a little wrong to me, because ideally your tests should be simple, and should themselves only need to change if your interfaces or rules change.

I may have misunderstood you, but I got the impression that you don't write "production" code. Otherwise you should have more control over the cycle of switching between test code and production code so as to reduce your problem.

Some suggestions:

  • Watch out for non-deterministic values. For example, dates and artificial keys can play havoc with certain tests. You need a clear strategy for how you'll resolve this (a topic for another answer on its own; one common approach is sketched after this list).
  • You'll need to work closely with the "production developers" to ensure that the aspects of the interfaces you're testing can stabilise. I.e. they need to be cognisant of how your tests identify and interact with GUI components so they don't arbitrarily break your tests with changes that "don't affect them".
  • On the previous point, it would help if the automated tests were run whenever they make changes.
  • You should also be wary of too many tests that simply boil down to arbitrary permutations. For example, if each customer has a category A, B, C, or D, then 4 "New Customer" tests (1 for each category) give you 3 extra tests that don't really tell you much more than the first one, and are 'hard' to maintain.
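On the non-deterministic values point above, one common strategy is to stop the code reading the real clock (or real key generator) directly and inject a controllable source instead. A minimal sketch in Java; InvoiceNumberGenerator is a hypothetical example, not something from the question:

import static org.junit.jupiter.api.Assertions.assertEquals;
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;
import org.junit.jupiter.api.Test;

// Inject a Clock so "now" can be pinned to a fixed instant in tests.
class InvoiceNumberGenerator {
    private final Clock clock;

    InvoiceNumberGenerator(Clock clock) {
        this.clock = clock;
    }

    String next() {
        return "INV-" + Instant.now(clock);
    }
}

class InvoiceNumberGeneratorTest {
    @Test
    void numberIsStableWhenTheClockIsFixed() {
        Clock fixed = Clock.fixed(Instant.parse("2009-02-01T00:00:00Z"), ZoneOffset.UTC);
        InvoiceNumberGenerator generator = new InvoiceNumberGenerator(fixed);
        assertEquals("INV-2009-02-01T00:00:00Z", generator.next());
    }
}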

Upvotes: 2

Andrew Grimm

Reputation: 81510

Kent Beck's book "Test-Driven Development: By Example" has an example of test-driven development of a unit test framework, so it's certainly possible to test your tests.

I haven't worked with GUIs or .NET, but what concerns do you have about your unit tests?

Are you worried that it may describe the target code as incorrect when it is functioning properly? I suppose this is a possibility, but you'd probably be able to detect that if this was happening.

Or are you concerned that it may describe the target code as functioning properly even if it isn't? If you're worried about that, then mutation testing may be what you're after. Mutation testing changes parts of the code being tested to see if those changes cause any tests to fail. If they don't, then either the code isn't being run, or the results of that code aren't being tested.

If mutation testing software isn't available on your system, then you could do the mutation manually, by sabotaging the target code yourself and seeing if it causes the unit tests to fail.
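As a concrete (and entirely hypothetical) illustration of that manual sabotage, in Java:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Manual mutation testing on a made-up function: deliberately break the
// production code, re-run the suite, and make sure at least one test fails.
class DiscountCalculator {
    static double discountedPrice(double price, boolean loyalCustomer) {
        // Manual mutation to try: change 0.90 to 1.00 and re-run the tests.
        // If nothing fails, this branch is effectively untested.
        return loyalCustomer ? price * 0.90 : price;
    }
}

class DiscountCalculatorTest {
    @Test
    void loyalCustomersGetTenPercentOff() {
        // This test "kills" the mutation above by pinning the 10% rule.
        assertEquals(90.0, DiscountCalculator.discountedPrice(100.0, true), 0.0001);
    }
}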

If you're building a suite of unit testing products that aren't tied to a particular application, then maybe you should build a trivial application that you can run your test software on and ensure it gets the failures and successes expected.

One problem with mutation testing is that it doesn't ensure that the tests cover all potential scenarios a program may encounter. Instead, it only ensures that the scenarios anticipated by the target code are all tested.

Upvotes: 2

Jonathan Hartley

Reputation: 16034

We generally use these rules of thumb:

1) All product code has both unit tests (arranged to correspond closely with product code classes and functions), and separate functional tests (arranged by user-visible features)

2) Do not write tests for 3rd party code, such as .NET controls or third party libraries. The exception to this is if you know they contain a bug which you are working around. A regression test for this (one which fails when the 3rd party bug disappears) will alert you when upgrades to your 3rd party libraries fix the bug, meaning you can then remove your workarounds (a sketch of such a test follows these rules).

3) Unit tests and functional tests are not, themselves, ever directly tested - APART from using the TDD procedure of writing the test before the product code, then running the test to watch it fail. If you don't do this, you will be amazed by how easy it is to accidentally write tests which always pass. Ideally, you would then implement your product code one step at a time, and run the tests after each change, in order to see every single assertion in your test fail, then get implemented and start passing. Then you will see the next assertion fail. In this way, your tests DO get tested, but only while the product code is being written.

4) If we factor out code from our unit or functional tests - creating a testing library which is used in many tests, then we do unit test all of this.
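As referenced in rule 2, here is a sketch of such a "canary" regression test in Java. ThirdPartyGrid and its off-by-one bug are invented for illustration:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Pin the *buggy* behaviour you are currently working around, so the test
// fails (and reminds you to remove the workaround) once the vendor fixes it.
class ThirdPartyBugCanaryTest {

    // Stand-in for the real 3rd-party control, bug included, so the sketch compiles.
    static class ThirdPartyGrid {
        private final int rows;
        ThirdPartyGrid(int rows) { this.rows = rows; }
        int rowCount() { return rows + 1; } // known vendor bug: off by one
    }

    @Test
    void vendorGridStillReportsOneRowTooMany() {
        ThirdPartyGrid grid = new ThirdPartyGrid(3);
        assertEquals(4, grid.rowCount());
    }
}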

This has served us very well. We seem to have always stuck to these rules 100%, and we are very happy with our arrangement.

Upvotes: 2

Esko Luontola

Reputation: 73625

The tests test the code, and the code tests the tests. When you state the same intention in two different ways (once in tests and once in code), the probability of both of them being wrong is very low (unless the requirements themselves were wrong). This can be compared to the double-entry bookkeeping used by accountants. See http://butunclebob.com/ArticleS.UncleBob.TheSensitivityProblem

Recently there has been discussion about this same issue in the comments of http://blog.objectmentor.com/articles/2009/01/31/quality-doesnt-matter-that-much-jeff-and-joel


About your question of whether GUI testing libraries should be tested... If I understood right, you are making your own testing library, and you want to know if you should test it. Yes. To be able to rely on the library reporting tests correctly, you should have tests which make sure that the library does not report any false positives or false negatives. Regardless of whether the tests are unit tests, integration tests or acceptance tests, there should be at least some tests.
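For instance, if the library exposed a custom grid assertion, the meta-tests might look like this (a Java/JUnit sketch; checkGridEquals is a hypothetical helper, not part of any real library):

import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

// A hypothetical custom assertion from the testing library...
class GridAssertions {
    static void checkGridEquals(String[][] expected, String[][] actual) {
        if (!java.util.Arrays.deepEquals(expected, actual)) {
            throw new AssertionError("Grid contents differ");
        }
    }
}

// ...and the tests that keep it honest: no false negatives, no false positives.
class GridAssertionsTest {

    @Test
    void reportsAFailureWhenTheGridsDiffer() {
        String[][] expected = {{"a", "b"}};
        String[][] actual = {{"a", "X"}};
        assertThrows(AssertionError.class,
                () -> GridAssertions.checkGridEquals(expected, actual));
    }

    @Test
    void staysSilentWhenTheGridsMatch() {
        String[][] grid = {{"a", "b"}};
        assertDoesNotThrow(() -> GridAssertions.checkGridEquals(grid, grid));
    }
}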

Usually writing unit tests after the code has been written is too late, because then the code tends to be more coupled. Unit tests force the code to be more decoupled, because otherwise small units (a class or a closely related group of classes) cannot be tested in isolation.

When the code has already been written, then usually you can add only integration tests and acceptance tests. They will be run with the whole system running, so you can make sure that the features work right, but covering every corner case and execution path is harder than with unit tests.

Upvotes: 2

Mendelt

Reputation: 37483

First of all, I've found it very useful to look at unit tests as "executable specifications" instead of tests. I write down what I want my code to do and then implement it. Most of the benefit I get from writing unit tests is that they drive the implementation process and focus my thinking. The fact that they're reusable to test my code is almost a happy coincidence.

Testing tests seems to be just a way to move the problem instead of solving it. Who is going to test the tests that test the tests? The 'trick' that TDD uses to make sure tests are actually useful is to make them fail first. This might be something you can use here too. Write the test, see it fail, then fix the code.
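In miniature, and with hypothetical names, the fail-first trick looks like this in Java:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Step 1: the stub below makes the new test fail (red), proving the test can fail.
// Step 2: implement the real behaviour ("return a + b;") and watch it pass (green).
class Calculator {
    int add(int a, int b) {
        return 0; // deliberately wrong until the failing test has been observed
    }
}

class CalculatorTest {
    @Test
    void addsTwoNumbers() {
        assertEquals(5, new Calculator().add(2, 3));
    }
}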

Upvotes: 11

SmacL

Reputation: 22922

Personally, I don't unit test my automation libraries; I run them against a modified version of the baseline to ensure all the checkpoints work. The principle here is that my automation is primarily for regression testing, i.e. checking that the results of the current run are the same as the expected results (typically this equates to the results of the last run). By running the tests against a suitably modified set of expected results, all the tests should fail. If they don't, you have a bug in your test suite. This is a concept borrowed from mutation testing that I find works well for checking GUI automation suites.
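A rough sketch of that self-check, in Java with hypothetical names: perturb every expected value in the baseline, replay the comparison, and require that every checkpoint now fails:

import java.util.LinkedHashMap;
import java.util.Map;

// Borrowed-from-mutation-testing check for a regression suite: against a
// deliberately wrong baseline, a healthy checkpoint must report a difference.
class BaselineSelfCheck {

    static Map<String, String> perturb(Map<String, String> baseline) {
        Map<String, String> modified = new LinkedHashMap<>();
        baseline.forEach((checkpoint, expected) -> modified.put(checkpoint, expected + "_CHANGED"));
        return modified;
    }

    static void verifyEveryCheckpointFails(Map<String, String> actualResults,
                                           Map<String, String> modifiedBaseline) {
        for (Map.Entry<String, String> entry : modifiedBaseline.entrySet()) {
            if (entry.getValue().equals(actualResults.get(entry.getKey()))) {
                // A checkpoint that still "passes" against a wrong baseline
                // isn't really checking anything.
                throw new AssertionError("Checkpoint did not fail as expected: " + entry.getKey());
            }
        }
    }
}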

Upvotes: 1

Mark Brittingham

Reputation: 28865

In theory, it is software and thus should be unit-tested. If you are rolling your own Unit Testing library, especially, you'll want to unit test it as you go.

However, the actual unit tests for your primary software system should never grow large enough to need unit testing. If they are so complex that they need unit testing, you need some serious refactoring of your software and some attention to simplifying your unit tests.

Upvotes: 5

Presidenten

Reputation: 6437

I don't think you should unit test your unit tests.

But if you have written your own testing library, with custom assertions, keyboard controllers, button testers or whatever, then yes: you should write unit tests to verify that they all work as intended.

The NUnit library, for example, is itself unit tested.

Upvotes: 9

Axelle Ziegler

Reputation: 2655

There really isn't a reason why you couldn't/shouldn't unit test your library. Some parts might be too hard to unit test properly, but most of it can probably be unit tested with no particular problem.

It's actually probably particularly beneficial to unit test this kind of code, since you expect it to be both reliable and reusable.

Upvotes: 2
