Andrew Grimm

Reputation: 81600

Version control and test-driven development

The standard process for test-driven development seems to be to add a test, see it fail, write production code, see the test pass, refactor, and check it all into source control.

Is there anything that allows you to check out revision x of the test code, and revision x-1 of the production code, and see that the tests you've written in revision x fail? (I'd be interested in any language and source control system, but I use ruby and git)

There may be circumstances where you might add tests that already pass, but they'd be more verification than development.
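Roughly what I have in mind for the checkout part, assuming the production code lives in lib/ and the tests in test/, with rake test standing in for whatever runs the suite:

    git checkout x             # whole working tree at revision x, new tests included
    git checkout x~1 -- lib/   # then roll the production code back one revision
    rake test                  # the tests added in revision x should now fail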

Upvotes: 7

Views: 1349

Answers (6)

Jay

Reputation: 57959

If you git commit after writing your failing tests, and then again when they are passing, you should at a later time be able to create a branch at the point where the tests fail.

You can then add more tests, verify that they also fail, git commit, git merge, and then run the tests against the current code base to see whether the work you've already done makes them pass or whether you now need to do some more work.
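A rough sketch of that flow (the branch name and commit reference are made up, and rake test stands in for your test runner):

    git checkout -b extra-tests <red-commit>   # branch from the commit where the tests fail
    # ...add more tests, run them, watch them fail...
    git commit -am "more failing tests"
    git checkout master
    git merge extra-tests
    rake test    # does the work already on master make them pass?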

Upvotes: 0

Nat

Reputation: 9951

"There may be circumstances where you might add tests that already pass, but they'd be more verification than development."

In TDD you always watch a test fail before making it pass so that you know it works.

As you've found, sometimes you want to explicitly describe behaviour that is already covered by code you've written, but which, viewed from outside the class under test, is a separate feature of the class. In that case the test will pass as soon as it's written.

But, still watch the test fail.

Either write the test with an obviously failing assertion and then fix the assertion to make it pass, or temporarily break the code, watch all affected tests fail (including the new one), and then fix the code so it works again.
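For instance, in Ruby with Test::Unit (Cart here is a hypothetical class whose behaviour already exists):

    require "test/unit"

    class CartTest < Test::Unit::TestCase
      def test_empty_cart_totals_zero
        # Run this first with an obviously wrong expectation (say, -1),
        # watch it fail, then fix the assertion to describe the real behaviour:
        assert_equal 0, Cart.new.total
      end
    end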

Upvotes: 1

Yishai

Reputation: 91921

Simply keep your tests and code in separate directories, and then you can check out one version of the tests and another of the code.
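With the question's Ruby/git setup, one way this might look (assuming code in lib/ and tests in test/; the test file name is made up):

    # keep a second working copy at the previous revision of the code
    git clone . ../prod-prev
    (cd ../prod-prev && git checkout x~1)
    # run this revision's tests against the previous revision's library
    ruby -I ../prod-prev/lib test/new_feature_test.rb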

That being said, in a multi-developer environment you generally don't want to be checking in code where the tests fail.

I would also question the motivation for doing this. If it is to "enforce" writing the failing test first, then I would point you to this comment from the father of (the promotion of) TDD.

Upvotes: 1

Nicolas Dumazet

Reputation: 7231

"Is there anything that allows you to check out revision x of the test code, and revision x-1 of the production code, and see that the tests you've written in revision x fail?"

I think the keyword you are looking for is continuous integration. Many such tools are implemented as post-commit hooks in version control systems (i.e. something that runs on the server/central repository after each commit): for example, they will run your unit tests after each commit, and email the committers if a revision introduces a regression.

Such tools are perfectly able to distinguish tests that are new and have never passed from old tests that used to pass and now fail because of a recent commit, which means that using TDD and continuous integration together is just fine: you can probably configure your tools not to scream when a new failing test is introduced, and to complain only about regressions.
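As a toy, local stand-in for what such a tool does on the server (assuming rake test runs your suite), a git post-commit hook might look like:

    #!/bin/sh
    # .git/hooks/post-commit -- run the suite after every commit;
    # a real CI server would do this centrally and email the committers.
    rake test || echo "suite broken as of $(git rev-parse --short HEAD)" >&2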

As always, I'll direct you to Wikipedia for a general introduction to the topic. A more detailed, quite famous resource is Martin Fowler's article on continuous integration.

Upvotes: 0

Andy White

Reputation: 88415

If you keep your production and test code in separate versioning areas (e.g. separate projects/source trees/libraries/etc.), most version control systems allow you to check out previous versions of code and rebuild them. In your case, you could check out the x-1 version of the production code, rebuild it, then run your test code against the newly built and deployed production artifact.

One thing that might help would be to tag/label all of your code when you do a release, so that you can easily fetch an entire source tree for a previous version of your code.
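In git, for example, that's just (the tag name here is arbitrary):

    git tag -a v1.4 -m "release 1.4"   # label the release...
    git checkout v1.4                  # ...and fetch that entire tree later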

Upvotes: 0

John Saunders

Reputation: 161821

A few things:

  1. After refactoring the test, you run the test again
  2. Then, you refactor the code, then run the test again
  3. Then, you don't have to check in right away, but you could

In TDD, there is no purpose in adding a test that passes. It's a waste of time. I've been tempted to do this in order to increase code coverage, but that code should have been covered by tests which actually failed first.

If the test doesn't fail first, then you don't know if the code you then add fixes the problem, and you don't know if the test actually tests anything. It's no longer a test - it's just some code that may or may not test anything.
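For a contrived Ruby illustration of that failure mode:

    require "test/unit"

    class DiscountTest < Test::Unit::TestCase
      def test_discount_is_applied
        price = 100
        discounted = price   # bug in the test: no discount code is ever called
        # This passes no matter what the production code does -- watching
        # it fail first would have exposed that it tests nothing.
        assert_equal 100, discounted
      end
    end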

Upvotes: 1
