Amen

Reputation: 1633

Testing a Unit's Efficiency in TDD

Let's say we need a sort function and want to ensure that its implementation is O(n log n) rather than O(n^2).

Using test-driven development, is there a systematic way to test the efficiency of this function's implementation?

According to Wikipedia, testing implementation details is considered an anti-pattern in test-driven development. Does that rule out any attempt in TDD to check the efficiency of the code satisfying a requirement, or is there a systematic way of doing so?

Upvotes: 0

Views: 59

Answers (2)

Jon Reid

Reputation: 20980

Instead of TDD, you can use test-after:

  • Inject a counter that measures number of operations
  • Run the algorithm for given input
  • Confirm that the count is less than a threshold

This would prevent regressions in the number of operations. (Keep in mind that it's no guarantee of real-world performance.)
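
Here's a rough sketch of that idea in Python. The merge sort under test, the on_operation hook, and the threshold are all placeholders for your real algorithm and operation budget:

    import math
    import random
    import unittest


    def merge_sort(items, on_operation=lambda: None):
        # Hypothetical test subject: merge sort with an injected hook
        # that fires once per element comparison.
        if len(items) <= 1:
            return list(items)
        mid = len(items) // 2
        left = merge_sort(items[:mid], on_operation)
        right = merge_sort(items[mid:], on_operation)
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            on_operation()
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged


    class TestSortOperationCount(unittest.TestCase):
        def test_operation_count_stays_under_threshold(self):
            n = 1024
            data = random.sample(range(10 * n), n)
            count = 0

            def counter():
                nonlocal count
                count += 1

            self.assertEqual(merge_sort(data, on_operation=counter), sorted(data))
            # Budget with slack above n * log2(n); an O(n^2) sort would
            # need on the order of n^2/4 comparisons and blow past it.
            self.assertLess(count, 3 * n * math.log2(n))

The budget needs enough slack that an honest O(n log n) implementation never trips it, while still sitting far below what a quadratic one would use on the same input.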

Upvotes: 0

VoiceOfUnreason

Reputation: 57239

It's not really the sweet spot of TDD -- keep in mind that the motivation for TDD is not testing (although that is a nice side effect), but design, which is to say making the code easy to change.

Part of the TDD ritual is running tests frequently during the development cycle; tests that distract from the development flow (for example, by taking a long time to run) are out of bounds. That's not to say that you can't have those tests; one of the arguments in favor of TDD is that it ensures that you have testable code. But you wouldn't normally expect to run tests that require significant wall-clock time during the red/green/refactor ritual.

Furthermore, tests that are tightly coupled to an implementation are a real drag when the implementation is unstable. You lose credibility when the tests interfere with changing the encapsulated design in the code.

Sometimes you can introduce an observability requirement, so that from outside the system you can get a count of how often a critical section is invoked. As long as that critical section is being exercised by the system, you may be able to use those counts as evidence to estimate whether or not the implementation scales the way you expect.

In the case of sort, that might mean a design where the comparison function is a configurable dependency, and in the test we provide an implementation that counts how often it is invoked.
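
A rough sketch of that design in Python (sort_items here just wraps the built-in sort, and the input sizes and growth bound are arbitrary illustrative choices):

    import random
    from functools import cmp_to_key


    def sort_items(items, compare):
        # Test subject: a sort whose comparison function is a
        # configurable dependency. Here it just wraps the built-in sort.
        return sorted(items, key=cmp_to_key(compare))


    def count_comparisons(n):
        # Sort n random items and report how often compare was invoked.
        calls = 0

        def counting_compare(a, b):
            nonlocal calls
            calls += 1
            return (a > b) - (a < b)

        sort_items(random.sample(range(10 * n), n), counting_compare)
        return calls


    # Crude scaling estimate: going from n to 4n, an O(n log n) sort
    # grows the count by a bit over 4x; an O(n^2) sort grows it ~16x.
    small, large = count_comparisons(1_000), count_comparisons(4_000)
    assert large / small < 8, f"suspicious growth: {large / small:.1f}x"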

But that does introduce some coupling -- at that point, what you are measuring is whether your comparison function is invoked, not whether the test subject produces the right answer. In some cases, that's fine. In other cases, that's over-coupling. I don't know of any simple heuristic for distinguishing the two cases without trying the experiment and getting burned when the over-coupling happens.

Upvotes: 1
