Stephen H. Anderson

Reputation: 1068

What kind of tests to use for legacy projects where we've just started using continuous integration and delivery

We have 12 legacy projects. One is an old Visual Basic application written 9 years ago; the others are C# (.NET) applications, two Java projects, and so on.

We've just finished cleaning and creating a repository for each project (some of them were just folders sitting on different computers...).

We have configured Jenkins with many useful plugins and bought two books, Continuous Integration and Continuous Delivery, which we haven't fully read yet.

We defined a deployment pipeline for our projects. All of them are automatically compiled after a commit to the repository, and code analysis (cyclomatic complexity, etc.) runs automatically as well.

However, we would like to know if there are tests (easy to add) that we could be using for our projects. We know about unit tests; however, writing unit tests for these projects would be too time consuming (if possible at all).

Are there other kinds of tests we could add, or other useful things we could add to our pipeline?

For some of the programs we are automatically generating an installer.

Also, at the end of the pipeline we have a manual step that moves the binary (installer) to a public folder on our Apache server, where people in the company can easily get the latest stable binary. "Stable" here means an application we manually install and test (exploratory testing, I think it's called); if we don't see anything wrong, we promote it as a stable release.

Upvotes: 0

Views: 108

Answers (3)

Florian

Reputation: 5051

Hm, maybe a bit late a year afterwards... but anyway, for anyone passing by here.

Continuous delivery is the premium league of agile techniques. With a huge block of legacy code, you will probably not become "continuous" for quite some time. Learn the ideas but don't get frustrated if you cannot reach them yet.

Setting up repositories and pipelines still is a good idea. The repositories allow you to roll back defective changes quickly. The pipelines give you the automation to run the large number of tests you will need to get on top of your code.

You don't need any other tools or more plugins. Your programming languages most probably already have everything you need. What you need is know-how, conviction and patience. Here is what worked for our teams:

Get Michael Feathers' Working Effectively with Legacy Code. It gives you the techniques you need to start modifying the legacy code without fear of breaking it. You think it's not possible to write unit tests for your legacy code? You're wrong about that. Feathers tells you how to do it. This is the know-how part.

Also, learn what characterization tests are and how they work. You lost staff and thereby expert knowledge. That code that nobody seems to know or remember what it does? Characterization tests help you probe it and enable you to refactor it.
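To make this concrete, here is a minimal characterization-test sketch. The function `legacy_discount` is a hypothetical stand-in for an undocumented legacy routine, not anything from the question; the point is that the expected values come from running the code, not from a spec.

```python
def legacy_discount(amount, code):
    # Stand-in for an undocumented legacy routine whose intent
    # nobody remembers.
    if code == "VIP":
        return round(amount * 0.8, 2)
    if amount > 100:
        return round(amount * 0.95, 2)
    return amount

def test_characterize_legacy_discount():
    # These expected values were obtained by running the code and
    # recording what it actually returned -- they document current
    # behaviour so a later refactoring can't silently change it.
    assert legacy_discount(50, "") == 50
    assert legacy_discount(200, "") == 190.0
    assert legacy_discount(200, "VIP") == 160.0
```

Once such tests are green, you can refactor the routine and rely on them to tell you if its observable behaviour changed.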

Don't start a huge project to "make your code great again". It took a while to write the code, it will take a while to fix it. Get your code under test piece by piece. Whenever you develop a new feature, write tests for the feature plus the legacy code it immediately connects to. When you fix a bug, first write unit tests around the code you are fixing. This will increase your code coverage while still letting you get real work done. This is the patience part.
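The "first write unit tests around the code you are fixing" step can be sketched like this. The bug and the function `parse_price` are hypothetical, chosen only to illustrate the order of work: reproduce the defect in a failing test, then fix the code.

```python
def parse_price(text):
    # The legacy version was just int(text) and crashed on input with
    # surrounding whitespace; .strip() is the fix, written only after
    # the test below had been seen to fail against the old code.
    return int(text.strip())

def test_parse_price_tolerates_whitespace():
    # Reproduced the reported bug as a failing test first;
    # now it guards the fix against regressions.
    assert parse_price(" 42 ") == 42
```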

Every week, get a single resource (class, method, function) completely under test, i.e. with a statement and branch coverage of 100%. It's better to have 1 resource at 100% coverage than 10 resources at 10% coverage.

Here's why: You can now refactor that resource. Read Robert C. Martin's Clean Code to get ideas how code can be made better. Then get some team members together and do a refactoring session:

Make a tiny improvement (rename a variable, remove a comment, extract a sub-method), then prove that all tests are still green, then pass the keyboard on to the next guy in the room. Repeat this over and over throughout the session. Don't forget to add sweets, chips, coke or beer to those sessions - make it a fun event.
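One micro-step from such a session might look like the following sketch: extract a helper, then prove the existing test is still green before passing the keyboard on. All names here are illustrative.

```python
def _net_amount(gross, tax_rate):
    # Extracted from invoice_total() during a refactoring session;
    # behaviour is unchanged, which the existing test proves.
    return gross / (1 + tax_rate)

def invoice_total(gross, tax_rate):
    return round(_net_amount(gross, tax_rate), 2)

def test_invoice_total_unchanged():
    # The same assertion passed before and after the extraction.
    assert invoice_total(119.0, 0.19) == 100.0
```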

Use the session to learn about the code, what it does, and why; this will enable all in the room to support that code that they wouldn't have touched otherwise.

It also gives people an idea of what they write all those unit tests for: to refactor code. Without that, they may perceive those unit tests as just one more useless burden. After all, it's sometimes the legacy developers, not the legacy code, that need to be treated first. That's the conviction part.

Upvotes: 0

Bert Jan Schrijver

Reputation: 1531

I usually apply three levels of tests:

  1. Unit tests - low-level tests that verify the correct behaviour of small, independent units of code. These tests directly call the code or its APIs, run fast (during build time), and can also break relatively easily during extensive refactoring.
  2. Integration tests - medium-level tests that verify the correct behaviour of a number of units of code working together. For example, an API provided by the backend to an external system or to the front-end. These tests operate above code level (via HTTP requests, for example), run a bit slower than unit tests (still during build time), but are less brittle since they test against the boundaries of the system (REST endpoints, for example).
  3. End-to-end tests - high-level tests that exercise the system as a whole. For a web application, this typically means browser testing (with Selenium, for example), where the tests control a browser connecting to a running instance of the system. These tests are pretty high level (they simulate user behaviour), run slowly, and not during build time (since the system needs to be deployed first).
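The first two levels can be sketched in a few lines. The functions below are hypothetical, and `price_endpoint` stands in for a real API boundary (which in practice would be an HTTP handler); the point is the difference in scope between the two kinds of test.

```python
def to_cents(euros):
    # A small, independent unit of code.
    return int(round(euros * 100))

def format_cents(cents):
    # Another small unit.
    return f"{cents // 100}.{cents % 100:02d}"

def price_endpoint(euros):
    # Stand-in for an API boundary: composes several units the way
    # the backend would when serving a request.
    return {"price": format_cents(to_cents(euros))}

# Level 1: unit test -- one unit, called directly, runs fast.
def test_to_cents():
    assert to_cents(19.99) == 1999

# Level 2: integration test -- several units exercised together
# through the system's boundary.
def test_price_endpoint():
    assert price_endpoint(19.99) == {"price": "19.99"}
```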

In your case, I'd combine these types of tests. Start by making an automated regression test suite using integration tests and/or end-to-end tests. These types of test can hit a relatively large part of the system with not too much effort. When adding/changing functionality, first write one or more unit tests that verify the current state of the system. Then add/change test cases that verify the desired/new state of the system and change the system accordingly.

By the way: please reconsider the statement "writing unit tests for these projects would be too time consuming". Yes, it might be time consuming, but not writing tests at all would also be time consuming since you'd probably break functionality all the time without knowing, and find yourself needing to fix lots of issues.

Upvotes: 1

Adi Gerber

Reputation: 666

Instead of writing unit tests for everything as it is right now, I believe you would be better off writing unit tests for new code you add. You could assume that at the current state, everything works as expected; then, when you find and fix a bug, or add a new feature, or pretty much make any change to the code base - write unit tests for that new code.

Regarding other kinds of tests, you may want to consider integration tests. This answer to another SO question explains what integration tests are for and their value in comparison to unit tests.

Upvotes: 1
