javacomelava

Reputation: 91

TDD, Unit Tests and mock injection: What about the Single Responsibility Principle?

I'm writing a Unit Test for a Class (CustomHttpClient) that internally uses a library (java.net.http.HttpClient) to make an external HTTP call.

Since this is a unit test, I would mock the internal HttpClient call (using a mocking framework like Mockito), returning a mocked response.

The Class to test has been designed to initialize the client library (that I want to mock) internally:

public class CustomHttpClient {

    public HttpResponse<String> doCall() throws IOException, InterruptedException {

       // Build HTTP client
       HttpClient client = HttpClient.newBuilder().build();

       // Build request
       HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://myserviceuri"))
                .GET()
                .header(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
                .build();

       // Do HTTP call --> TO MOCK WHEN UNIT TESTING
       return client.send(request, HttpResponse.BodyHandlers.ofString());
    }
}

Given the way the class is designed, it is impossible to replace the real HttpClient instance with a mocked one during tests. From a TDD perspective, this design seems to break the principle that a class should be designed to be easily testable.

TDD suggests using Dependency Injection to wire up dependencies, so that it is easy to inject mocks when testing.

According to this TDD principle, I see two different approaches:

  1. Passing the external library in the CustomHttpClient constructor
  2. Creating a Setter method to inject the external library
public class CustomHttpClient {

    private HttpClient client;

    // External library injected on object creation
    public CustomHttpClient(HttpClient client){
       this.client = client;
    }

    public HttpResponse<String> doCall() throws IOException, InterruptedException {

       // Build request
       HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://myserviceuri"))
                .GET()
                .header(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
                .build();

       // Do HTTP call --> TO MOCK WHEN UNIT TESTING
       return client.send(request, HttpResponse.BodyHandlers.ofString());
    }
}
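For completeness, approach 2 (setter injection) might look like the following sketch. The class name and the plain-string Content-Type header are my own choices here, used instead of the Spring constants above so that the snippet compiles on its own:

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

class SetterInjectedHttpClient {

    // Sensible production default; a test can replace it via the setter.
    private HttpClient client = HttpClient.newBuilder().build();

    // Approach 2: the dependency is injected through a setter,
    // so a test can swap in a mock before calling doCall()
    public void setClient(HttpClient client) {
        this.client = client;
    }

    public HttpResponse<String> doCall() throws IOException, InterruptedException {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://myserviceuri"))
                .GET()
                .header("Content-Type", "application/json")
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString());
    }
}
```

Note the trade-off: with setter injection the object can exist in a partially configured state, which is one reason constructor injection is usually preferred (see Mark Seemann's answer below on invariants).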

Anyway, both approaches seem to me to break the Single Responsibility Principle and, at the very least, not to be elegant.

Why should a CustomHttpClient consumer have the responsibility of injecting a dependency that could be handled internally by the module itself? Is it acceptable for testing purposes?

Maybe I'm missing something and I really would appreciate any help :-)

P.S.: I also know that some testing frameworks, like PowerMock, can handle this particular use case; anyway, I'm more interested in the correct design to use, and why.

Thank you

Upvotes: 1

Views: 63

Answers (2)

VoiceOfUnreason

Reputation: 57397

Maybe I'm missing something

A little bit; but that's not your fault. The Literature Sucks[tm].

Why should a "CustomHttpClient" consumer have the responsibility of injecting a dependency that could be handled internally by the module itself?

The rule is that the flexibility to support controlled experiments is something that you need to build into the design somewhere. Nobody says that you have to do that naively.

(What follows will make more sense if you are familiar with "information hiding"; see Parnas 1971).

Part of the trap that you have run into here is the assumption that the client code should be directly coupled to the constructor of the CustomHttpClient code. That's not necessarily so; you could instead introduce a level of indirection to "hide" the details.

For example, you might use a "named constructor":

public class CustomHttpClient {
    public static CustomHttpClient live() {
        return new CustomHttpClient();
    }
    // ...
}

When you later decide that you need to be passing additional arguments to the constructor, code that is directly coupled to CustomHttpClient::CustomHttpClient needs to change, but code that is directly coupled to CustomHttpClient::live does NOT need to change.

The point here is not that you should do it this way, specifically, but rather that we have techniques available for limiting the blast radius of the design decisions we make, when we think that limiting the blast radius is important.


Another issue that you may be running into is that you may be mocking the wrong thing. Consider a slight tweak to your original design:

public class CustomHttpClient {

    public HttpResponse<String> doCall() throws IOException, InterruptedException {
       return this.verb(
           this.request()
       );
    }

    HttpRequest request() {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://myserviceuri"))
                .GET()
                .header(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
                .build();
    }

    HttpResponse<String> verb(HttpRequest request) throws IOException, InterruptedException {
       HttpClient client = HttpClient.newBuilder().build();
       return client.send(request, HttpResponse.BodyHandlers.ofString());
    }
}

In this design CustomHttpClient::doCall is "so simple that there are obviously no deficiencies", so maybe you don't need unit tests at that grain. CustomHttpClient::verb is similarly very simple. Perhaps the only code that you actually need to "unit test" is CustomHttpClient::request?
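A test for the request-building logic then needs no mocking at all. Here's a self-contained sketch; the class name is hypothetical and plain header strings stand in for the Spring constants used above:

```java
import java.net.URI;
import java.net.http.HttpRequest;

class RequestFactory {
    // Same shape as CustomHttpClient::request, extracted here
    // so this snippet compiles on its own.
    static HttpRequest request() {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://myserviceuri"))
                .GET()
                .header("Content-Type", "application/json")
                .build();
    }
}
```

The test is just plain assertions on the immutable HttpRequest value: check the method, the URI, and the headers.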

In the event that you need to "unit test" CustomHttpClient::doCall, the thing to recognize is that doCall isn't directly coupled to HttpClient - it is directly coupled to CustomHttpClient::verb, and that's the element that needs substitution.

In other words, we can play the information hiding game again. Here's a rough sketch

public interface HttpSeam {
    HttpResponse<String> verb(HttpRequest request) throws IOException, InterruptedException;
}

class LiveHttpSeam implements HttpSeam {
    public HttpResponse<String> verb(HttpRequest request) throws IOException, InterruptedException {
       HttpClient client = HttpClient.newBuilder().build();
       return client.send(request, HttpResponse.BodyHandlers.ofString());
    }
}

class CustomHttpClient {
    public static CustomHttpClient live() {
        return CustomHttpClient.using(new LiveHttpSeam());
    }

    static CustomHttpClient using(HttpSeam seam) {
        return new CustomHttpClient(seam);
    }


    final HttpSeam seam;

    CustomHttpClient(HttpSeam seam) {
       this.seam = seam;
    }

    public HttpResponse<String> doCall() throws IOException, InterruptedException {
       return this.verb(
           this.request()
       );
    }

    HttpRequest request() {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://myserviceuri"))
                .GET()
                .header(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
                .build();
    }

    HttpResponse<String> verb(HttpRequest request) throws IOException, InterruptedException {
       return seam.verb(request);
    }
}

So for your tests, you "mock" the seam and pass that mock to the appropriate named constructor:

    HttpSeam mock = ...;
    CustomHttpClient testSubject = CustomHttpClient.using(mock);
    ...
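With a seam in place you don't even need Mockito; a hand-written fake will do. Here's a simplified, self-contained sketch (the names are my own, and the seam returns the response body as a String so that no java.net.http.HttpResponse implementation is needed):

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Simplified seam: returns the response body as a String, so the
// fake in the test needs no java.net.http.HttpResponse implementation.
interface SimpleHttpSeam {
    String verb(HttpRequest request);
}

class SeamedHttpClient {
    private final SimpleHttpSeam seam;

    static SeamedHttpClient using(SimpleHttpSeam seam) {
        return new SeamedHttpClient(seam);
    }

    private SeamedHttpClient(SimpleHttpSeam seam) {
        this.seam = seam;
    }

    String doCall() {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://myserviceuri"))
                .GET()
                .build();
        return seam.verb(request);
    }
}
```

Because the seam is a single-method interface, a test "mock" is just a lambda returning a canned body.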

Ruth Malan observed:

Design is what we do when we want more of what we want than we would get by just doing it.

Part of the point of TDD is that we aren't bolting testing directly onto whatever design just happened, but instead that we are deliberately making design trade offs to get more of the testability that we want.

Upvotes: 1

Mark Seemann

Reputation: 233487

TDD suggests using Dependency injection to inject dependencies so that it can be easy to inject mocks when testing

That's one option. More broadly, test-driven development is a programming process in which you alternate between writing a test, writing a bit of implementation code, and reflecting on what you've learned; this is most commonly captured by the red-green-refactor cycle.

Since we usually wish to get rid of non-deterministic behaviour in automated tests, we need to consider how to control such parts of the system. Replacing a non-deterministic dependency with a Test Double is a common way to do that. It's not the only way, though.

(Since the OP lists Java code, I'll assume that, say, a Haskell or F# example isn't warranted, but in functional programming, one would typically address such concerns by designing around pure functions, which are intrinsically testable. You'd typically have a pure function that creates an immutable HTTP request value, and perhaps another pure function that takes as input an immutable HTTP response value. Those pure functions are trivially testable, and you'd normally just compose them with the non-deterministic action (the actual HTTP request) using standard combinators. Such combinators are usually defined in general-purpose base libraries, so it's not something that you normally test. If you're interested, here's an example: Refactoring registration flow to functional architecture.)
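(Translated into Java, that pure-function idea might look like the following sketch; the names and the trivial parsing logic are hypothetical, but note that neither function touches the network, so both are testable with plain assertions:)

```java
import java.net.URI;
import java.net.http.HttpRequest;

class FunctionalStyle {
    // Pure: same uri in, same immutable HttpRequest value out.
    static HttpRequest buildRequest(String uri) {
        return HttpRequest.newBuilder()
                .uri(URI.create(uri))
                .GET()
                .build();
    }

    // Pure: turns a raw response body into a domain value.
    static int parseCount(String body) {
        return Integer.parseInt(body.trim());
    }
}
```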

That digression aside, in the rest of this answer, I'll assume that an object-oriented solution is of more interest.

Even if you want to replace non-deterministic behaviour with something that you can control from a test, Dependency Injection (DI) isn't the only option. You can instead use design patterns such as Template Method or Factory Method.
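As a sketch of the Factory Method option (hypothetical class name): client creation moves to a protected method, which a test subclass can override to supply a controllable client, with no constructor changes at all:

```java
import java.net.http.HttpClient;

class FactoryMethodHttpClient {
    // Factory Method: production code gets the default client;
    // a test subclass overrides this to supply its own.
    protected HttpClient createClient() {
        return HttpClient.newBuilder().build();
    }
}
```

The cost of this pattern is that the substitution point is inheritance-based, which is why many people prefer the composition-based alternatives (DI, or the seam shown in the other answer).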

If, on the other hand, you decide to use DI, I recommend using Constructor Injection over Property Injection. In my book on DI, I explain in more detail why Constructor Injection is preferable, but in short, it's the simplest way to ensure that the object in question maintains its invariants.
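A minimal sketch of the invariant argument (hypothetical class name): with Constructor Injection the null check runs exactly once, at construction time, and a final field guarantees the invariant can never be broken afterwards:

```java
import java.net.http.HttpClient;
import java.util.Objects;

class InjectedHttpClient {
    private final HttpClient client;

    // The invariant "client is never null" is established here, once;
    // the final field guarantees it holds for the object's lifetime.
    InjectedHttpClient(HttpClient client) {
        this.client = Objects.requireNonNull(client, "client");
    }

    HttpClient client() {
        return client;
    }
}
```

With Property (setter) Injection, by contrast, every method would have to consider the possibility that the setter was never called.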

Anyway, both approaches seem to me to break the Single Responsibility principles

That's a common misconception related to the Single Responsibility Principle (SRP), which, put succinctly, says that a class should have a single reason to change. If you inject an HttpClient abstraction into an object, you can vary its behaviour without changing it. This is actually one way to follow the open-closed principle (OCP).

... and, at least, to be not elegant.

Indeed, this kind of design has met much criticism for leaving test-induced damage in its wake. I agree with the criticism, but not with the conclusion.

Why should a "CustomHttpClient" consumer have the responsibility of injecting a dependency that could be handled internally by the module itself? Is it acceptable for testing purposes?

You can't always predict the effects of design decisions from the outset, but as a general observation, TDD improves reusability. By 'forcing' a piece of code to work both in a test context and in its intended production context, you've already demonstrated that it works in twice as many scenarios as first envisioned. Perhaps that variability proves useful later.

For example, you may later realize that you don't want to use a 'raw' HTTP client, because you want to use the Decorator pattern to wrap it with a logger. And you may later want to wrap it again with another Decorator that implements a Circuit Breaker.
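As a sketch of the logging Decorator idea (the interface and names are hypothetical; a real version would wrap the HttpClient call itself):

```java
// Hypothetical single-method abstraction over an HTTP call, kept as a
// String -> String function so the sketch is self-contained.
interface HttpSend {
    String send(String url);
}

// Decorator: adds logging around any HttpSend without modifying it --
// the wrapped component stays closed to modification, open to extension.
class LoggingHttpSend implements HttpSend {
    private final HttpSend inner;

    LoggingHttpSend(HttpSend inner) {
        this.inner = inner;
    }

    public String send(String url) {
        System.out.println("-> GET " + url);
        String body = inner.send(url);
        System.out.println("<- " + body);
        return body;
    }
}
```

A Circuit Breaker would be another Decorator over the same interface, and the two compose: `new CircuitBreakerHttpSend(new LoggingHttpSend(live))` (CircuitBreakerHttpSend being a hypothetical further decorator).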

You get that kind of flexibility from the OCP, and testability becomes just a by-product of a SOLID design.

Upvotes: 3
