captainskippah

Reputation: 1559

Hard-coded vs Soft-coded expected values for Integration Testing

I understand that in unit tests, hard-coded expected values are preferred, since we want to write as little code as possible.

For integration testing, however, does the same principle still apply? For context, here's a common scenario:

Here's some imaginary code showing how both approaches might look:

Hard-coded:

// Arrange
$note = new Note('note123', 'John', 'example message');

$this->noteRepository->save($note);

// Act
$response = $this->json('GET', '/api/notes');

// Assert
$response->seeJsonEquals([
   'id' => 'note123',
   'author' => 'John',
   'message' => 'example message'
]);

Soft-coded:

// Arrange
$note = new Note('note123', 'John', 'example message');

$this->noteRepository->save($note);

// Act
$response = $this->json('GET', '/api/notes');

// Assert
$noteSerializer = new NoteSerializer();

$response->seeJsonEquals($noteSerializer->serialize($note));

The only catch with the soft-coded approach is that if the Serializer is the problem, the test will still pass, because both the controller and the expected value use it.

However, we can solve that by creating another test for the Serializer.
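One way to sketch such a dedicated serializer test (a hedged illustration only: `Note` and `NoteSerializer` here are minimal stand-ins for the real classes, which may look different):

```php
<?php
// Hypothetical minimal Note and NoteSerializer, assumed shapes based on
// the question's snippets, not the real implementation.
class Note
{
    public function __construct(
        public string $id,
        public string $author,
        public string $message
    ) {}
}

class NoteSerializer
{
    public function serialize(Note $note): array
    {
        return [
            'id' => $note->id,
            'author' => $note->author,
            'message' => $note->message,
        ];
    }
}

// The expected value is hard-coded in this test, so a broken serializer
// can no longer hide: this test fails independently of the controller test.
$note = new Note('note123', 'John', 'example message');
$actual = (new NoteSerializer())->serialize($note);
$expected = ['id' => 'note123', 'author' => 'John', 'message' => 'example message'];

assert($actual === $expected);
```

With this in place, the integration test can lean on the serializer for its expected value, while the serializer's own test pins down the exact output.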

We might end up with lots of mini-tests, but I think it will save us a lot of time compared to hard-coding. If we change our response structure, every hard-coded test will need changes as well, while with the soft-coded approach only the serializer's own test needs to change.

I might be missing something, but I have already tried to Google this and everything I find is about unit tests, so I thought I'd ask whether the same principle applies to integration tests as well.

Upvotes: 1

Views: 1583

Answers (2)

Fabio

Reputation: 32445

In your particular example, using NoteSerializer doesn't make sense, because your assertions are built with the same code as the implementation.

Would you call this test valuable?

// Arrange
$original = 42;
$expected = $original + 100;

// Act
$actual = $original + 100;

// Assert
$this->assertEquals($expected, $actual);

The problem with using NoteSerializer to build the expected value is that, as you already noticed, if the serializer is broken the tests will remain green.

Instead, you can deserialize the received response into a class and compare it to the original $note.
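That suggestion could be sketched like this (again a hypothetical illustration: `Note`, `deserializeNote`, and the response body are stand-ins, not the asker's real code):

```php
<?php
// Assumed minimal Note class, mirroring the question's constructor.
class Note
{
    public function __construct(
        public string $id,
        public string $author,
        public string $message
    ) {}
}

// Hypothetical helper: turn the raw JSON body back into a Note.
function deserializeNote(string $json): Note
{
    $data = json_decode($json, true, 512, JSON_THROW_ON_ERROR);
    return new Note($data['id'], $data['author'], $data['message']);
}

// Pretend this is the raw body received from GET /api/notes.
$responseBody = '{"id":"note123","author":"John","message":"example message"}';

$original = new Note('note123', 'John', 'example message');
$received = deserializeNote($responseBody);

// Loose == on objects compares class and property values, which is what
// we want here; === would demand the very same instance.
assert($received == $original);
```

Because the deserialization path is distinct from the serialization path the controller uses, a bug in the serializer now shows up as a mismatch instead of cancelling out.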

Upvotes: 0

VoiceOfUnreason

Reputation: 57367

TL;DR: yes, it makes perfect sense to have integration tests that assume that other testing strategies are sharing the responsibility for detecting mistakes.

I think you'll find that there are two different ideas here, that are getting mixed.

One problem is independent verification. There are lots of tests that you can run to demonstrate that a given solution is internally consistent, but that's not equivalent to demonstrating that a given solution is correct. The latter usually requires querying the test subject for data and then doing an independent evaluation.

UltimateAnswer lifeTheUniverseAndEverything = deepThought();

// Compare this
assertEquals(new UltimateAnswer(42), lifeTheUniverseAndEverything);

// to this
assertEquals(42, lifeTheUniverseAndEverything.toInt());

What counts as independent? I consider it to be a fuzzy line -- if we had enough tests in place to have some arbitrary number of nines of confidence in UltimateAnswer::equals, then it might be fine to treat that verification as independent. On the other hand, I've been burned at least twice by using domain-agnostic primitives to "independently" verify that things worked, only to discover I was actually performing a dependent verification, and the test was failing to catch the bug I expected it to.

A second problem is over-fitting -- it is often the case that a number of distinguishable behaviors may all be satisfactory. Example: what should be the result of List.shuffle()? If the tests are intended to describe your requirements, then they are going to be more forgiving than tests which document example behaviors.

Strictly fitted tests are fantastic when your primary activity is refactoring, and you are trying to verify that the change that you made does indeed preserve the precise behavior of your system. They can be lousy when testing a new system with a small core deviation in behavior that shows up everywhere (consider tests that verify output strings, after the date formatting requirements get changed).

To my mind, neither of these concerns is particularly different for "integration tests" vs "unit tests". Admittedly, part of the problem is that it's never particularly clear which definition of these ideas another person is working from.

In most cases, the different kinds of tests have different trade-offs. We want our verification to be cost-effective, so we're probably going to have a layered testing strategy, where the kinds of checks we implement depend on the context.

Upvotes: 3
