Mark Estrada

Reputation: 9191

Unit testing an executor callable

I have a controller class like this that passes an executor into an instance of Runnable. This isn't exactly how the real code works, but I have simplified it for this question.

package com.test.executor;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Controller {
    private ExecutorService executor;

    public Controller() {
        executor = Executors.newCachedThreadPool();
    }

    public void doRun() {
        MyRunner runner = new MyRunner(executor);
        Thread myRunner = new Thread(runner);
        myRunner.start();
    }

    public static void main(String[] args) {
        new Controller().doRun();
    }
}

The runner receives the executor instance and submits various callables to it. The callables are a bit diverse: some access a database, some call a web service, and some touch the file system.

I am having some trouble figuring out how to write a proper JUnit test for this case. I want as much code coverage as possible.

package com.test.executor;

import java.util.Collections;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class MyRunner implements Runnable {

    private ExecutorService executor;

    public MyRunner(ExecutorService executor) {
        this.executor = executor;
    }

    @Override
    public void run() {
        Future<Boolean> ret1 = executor.submit(new SampleCall1());

        try {
            ret1.get(5, TimeUnit.MINUTES);
        } catch (InterruptedException | ExecutionException | TimeoutException e) {
            e.printStackTrace();
        }

        // Do other things

        Future<List<String>> ret2 = executor.submit(new SampleCall2());

        try {
            ret2.get(5, TimeUnit.MINUTES);
        } catch (InterruptedException | ExecutionException | TimeoutException e) {
            e.printStackTrace();
        }

        // Do other things

        Future<Long> ret3 = executor.submit(new SampleCall3());

        try {
            ret3.get(5, TimeUnit.MINUTES);
        } catch (InterruptedException | ExecutionException | TimeoutException e) {
            e.printStackTrace();
        }
    }

    public static class SampleCall1 implements Callable<Boolean> {

        @Override
        public Boolean call() throws Exception {
            // Sample Return Only
            // This will call JSON web service
            return true;
        }

    }

    public static class SampleCall2 implements Callable<List<String>> {

        @Override
        public List<String> call() throws Exception {
            // Sample Return Only
            // This will call Database
            return Collections.emptyList();
        }

    }

    public static class SampleCall3 implements Callable<Long> {

        @Override
        public Long call() throws Exception {
            // Sample Return Only
            // This will access some file system
            return 1L;
        }

    }

}

My question is: what is the proper way to write a unit test for this class? I am not sure what to mock with JUnit/Mockito. Should I mock each callable, and then create a test case for MyRunner?

I worry about the dependencies: the code connects to a database, a web service, and the file system, so I would like to ask for some advice.

UPDATE 2

package com.test.executor;

import java.util.Collections;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class MyRunner implements Runnable {

    private ExecutorService executor;

    public MyRunner(ExecutorService executor) {
        this.executor = executor;
    }

    @Override
    public void run() {
        executeTasks1();
        // Do other things

        executeTasks2();
        // Do other things


        executeTasks3();

    }

    public boolean executeTasks1() {
        Future<Boolean> ret1 = executor.submit(new SampleCall1());

        try {
            return ret1.get(5, TimeUnit.MINUTES);
        } catch (InterruptedException | ExecutionException | TimeoutException e) {
            e.printStackTrace();
            return false; // fall back when the task fails or times out
        }
    }

    public List<String> executeTasks2() {
        Future<List<String>> ret2 = executor.submit(new SampleCall2());

        try {
            return ret2.get(5, TimeUnit.MINUTES);
        } catch (InterruptedException | ExecutionException | TimeoutException e) {
            e.printStackTrace();
            return Collections.emptyList();
        }
    }

    public Long executeTasks3() {
        Future<Long> ret3 = executor.submit(new SampleCall3());

        try {
            return ret3.get(5, TimeUnit.MINUTES);
        } catch (InterruptedException | ExecutionException | TimeoutException e) {
            e.printStackTrace();
            return null;
        }
    }

    public static class SampleCall1 implements Callable<Boolean> {

        @Override
        public Boolean call() throws Exception {
            // Sample Return Only
            // This will call JSON web service
            return true;
        }

    }

    public static class SampleCall2 implements Callable<List<String>> {

        @Override
        public List<String> call() throws Exception {
            // Sample Return Only
            // This will call Database
            return Collections.emptyList();
        }

    }

    public static class SampleCall3 implements Callable<Long> {

        @Override
        public Long call() throws Exception {
            // Sample Return Only
            // This will access some file system
            return 1L;
        }

    }

}

Upvotes: 4

Views: 9998

Answers (2)

Roland

Reputation: 23272

Don't spend too much time on code coverage; be pragmatic. Test what is worth testing, and aim for test quality rather than quantity. A side note on metrics: 100% line coverage counts for much less than high branch coverage, yet chasing high branch coverage can cost a lot of time, and even then you may not need to exercise every possible path. PIT (mutation testing) is a helpful tool that checks how good your tests actually are by mutating the code under test, but use its output as a guide, not as a mandate to write ever more tests. Don't exaggerate! (If you are developing a library that you don't want to change often anymore, or want to make rock-solid, and you have the spare time, exhaustive testing can make sense; otherwise I wouldn't.)

There was a time when I tested everything very thoroughly. The problem: as soon as changes kick in (today we require X, tomorrow it's Y), many of those tests fail and a lot of rework is required. If you test only what is most appropriate, then when the requirements change dramatically you can focus on what is important, which takes much less time.

Looking at the code you presented, I would probably write a test class for each Callable implementation, just to ensure that each one does what I want (this may lead to extracting the Callable classes as a side effect). For the controller and the runner I am less sure: MyRunner already seems obsolete to me, but maybe it isn't. So I would probably only test the controller.
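Testing an extracted Callable in isolation might look like the following sketch (hypothetical names, not the asker's code): it assumes the web-service dependency behind something like SampleCall1 has been pulled out behind an interface so it can be stubbed, and uses plain assertions instead of JUnit just to keep the sketch self-contained.

```java
import java.util.concurrent.Callable;

// Hypothetical interface: the remote call extracted so tests can stub it.
interface StatusClient {
    boolean isUp();
}

// A SampleCall1-style callable that takes its collaborator via the constructor
// instead of reaching out to the real web service itself.
class StatusCall implements Callable<Boolean> {
    private final StatusClient client;

    StatusCall(StatusClient client) {
        this.client = client;
    }

    @Override
    public Boolean call() {
        // In production this would delegate to the JSON web service client.
        return client.isUp();
    }
}
```

A test then just stubs the client with a lambda and invokes call() directly: `new StatusCall(() -> true).call()` should yield true, with no network involved.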

Regarding mocking: I have started omitting mocks whenever possible. Most of the time I also add integration tests, because I want to know how the system behaves as a whole and whether it works as expected. I have seen many test suites with so many mocks in place that changing the code broke no unit tests at all, even though some should have failed in my opinion. I have also seen plenty of unit tests that in fact looked like integration tests, only with mocks everywhere; what does the mock help with then? Setting up the mocks alone probably took far too much time. So even though many people want the test pyramid to have a broad unit-test base, I take a more pragmatic view and move some of those tests to the integration-test layer instead of using tons of mocks. In the end it is also a question of availability and speed: you want your tests to give fast results, yet you want those results to be of the most value (which mocks only partially provide).
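One way to avoid mocks for the executor itself, in line with this advice, is a hand-rolled same-thread ExecutorService (a sketch, not part of the original code): submitted tasks run synchronously on the caller's thread, so a test of MyRunner stays deterministic without any mocking library.

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.AbstractExecutorService;
import java.util.concurrent.TimeUnit;

// A same-thread ExecutorService: execute() runs the task immediately, so
// every submit() returns an already-completed Future. AbstractExecutorService
// supplies the submit() plumbing for free.
class DirectExecutorService extends AbstractExecutorService {
    private boolean shutdown;

    @Override public void execute(Runnable command) { command.run(); }
    @Override public void shutdown() { shutdown = true; }
    @Override public List<Runnable> shutdownNow() { shutdown = true; return Collections.emptyList(); }
    @Override public boolean isShutdown() { return shutdown; }
    @Override public boolean isTerminated() { return shutdown; }
    @Override public boolean awaitTermination(long timeout, TimeUnit unit) { return shutdown; }
}
```

Passing this into `new MyRunner(...)` makes `run()` execute every callable inline, so assertions can run right after it returns, with no timeouts or thread coordination in the test.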

Upvotes: 4

Karol Dowbecki

Reputation: 44962

Writing a huge MyRunnerTest test class is as much of a code smell as writing a huge MyRunner production class. Since each of your Callables is diverse and accesses a different I/O resource, you want to test each of them separately. The actual approach, e.g. a unit test for the file system operations or an integration test with an embedded H2 database, should be selected on a case-by-case basis.

Extracting the Callables should leave you with a smaller MyRunner class. You can then split run() into smaller methods or classes and test them separately. At this point you can mock the ExecutorService.
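A sketch of that idea (a hypothetical TaskRunner class mirroring the extracted executeTasks1, assuming the fixed version returns a fallback on failure): once the method is small, both its success branch and its failure branch can be exercised with a real executor and hand-written callables.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical stand-in for MyRunner after extraction: one small method,
// one submitted task, a short timeout, and a defined fallback on failure.
class TaskRunner {
    private final ExecutorService executor;

    TaskRunner(ExecutorService executor) {
        this.executor = executor;
    }

    boolean executeTask(Callable<Boolean> task) {
        Future<Boolean> future = executor.submit(task);
        try {
            return future.get(1, TimeUnit.SECONDS);
        } catch (InterruptedException | ExecutionException | TimeoutException e) {
            return false; // defined fallback instead of a swallowed exception
        }
    }
}
```

A test can pass `() -> true` to cover the happy path and a callable that throws to cover the ExecutionException branch, no mocking framework required.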

You should test the real ExecutorService in the test for the Controller class, where the actual object is created.
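One common way to make that Controller-level test possible, sketched here as a hypothetical refactor (not the asker's code), is constructor injection: production code keeps the cached thread pool as the default, while a test can inject its own executor and inspect or control it.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical refactor of Controller: a second constructor lets tests
// supply the executor, while the no-arg constructor keeps the old default.
class InjectableController {
    private final ExecutorService executor;

    InjectableController() {
        this(Executors.newCachedThreadPool());
    }

    InjectableController(ExecutorService executor) {
        this.executor = executor;
    }

    ExecutorService executor() {
        return executor;
    }
}
```

A test can now pass in, say, a single-thread or same-thread executor and verify that doRun()-style logic uses exactly the executor it was given.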

Coverage is just a tool that helps you measure the quality of your tests; high coverage is not a goal in itself. Taken to the extreme, you can easily have 100% code coverage without a single assertion in your tests.

You can apply other techniques, such as Test-Driven Development, and tools such as PIT mutation testing to improve your tests. However, you should start by making your production code easy to test.

Upvotes: 0
