jørgen k. s.

Reputation: 145

How to avoid congesting/stalling/deadlocking an ExecutorService with recursive Callables

All the threads in an ExecutorService are busy with tasks that are themselves waiting for tasks stuck in the queue of the same executor service, so no thread can make progress.

Example code:

ExecutorService es = Executors.newFixedThreadPool(8);
Set<Future<Set<String>>> outerSet = new HashSet<>();
for (int i = 0; i < 8; i++) {
    outerSet.add(es.submit(new Callable<Set<String>>() {

        @Override
        public Set<String> call() throws Exception {
            Thread.sleep(10000); // to simulate work
            Set<Future<String>> innerSet = new HashSet<>();
            for (int j = 0; j < 8; j++) {
                int k = j;
                innerSet.add(es.submit(new Callable<String>() {
                    @Override
                    public String call() throws Exception {
                        return "number " + k + " in inner loop";
                    }
                }));
            }
            Set<String> out = new HashSet<>();
            while (!innerSet.isEmpty()) {                         // we are stuck in this loop because all the
                for (Iterator<Future<String>> it = innerSet.iterator(); it.hasNext(); ) {
                    Future<String> f = it.next();                 // callables in innerSet are stuck in the queue
                    if (f.isDone()) {                             // of es and can't start, since all the threads
                        out.add(f.get());                         // in es are busy waiting for them to finish
                        it.remove();
                    }
                }
            }
            return out;
        }
    }));
}

Is there any way to avoid this other than by making a separate thread pool for each layer, or by using a thread pool that is not fixed in size?

A practical example would be callables submitted to ForkJoinPool.commonPool() whose tasks use objects that themselves submit to the common pool in one of their methods.

Upvotes: 1

Views: 895

Answers (3)

Kayaman

Reputation: 73558

You should use a ForkJoinPool. It was made for this situation.

Whereas your solution blocks a thread permanently while it's waiting for its subtasks to finish, the work-stealing ForkJoinPool can perform work while in join(). This makes it efficient for situations where you have a variable number of small (and often recursive) tasks being run. With a regular thread pool you would need to oversize it to make sure you don't run out of threads.
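
For the example in the question, a minimal sketch with RecursiveTask might look like this (the class names are only illustrative):

import java.util.List;
import java.util.Set;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ForkJoinExample {

    // Outer task: forks the inner tasks, then joins them.
    // While joining, the worker thread can steal and run other queued tasks
    // instead of blocking, so the pool doesn't stall.
    static class OuterTask extends RecursiveTask<Set<String>> {
        @Override
        protected Set<String> compute() {
            List<InnerTask> inner = IntStream.range(0, 8)
                    .mapToObj(InnerTask::new)
                    .collect(Collectors.toList());
            inner.forEach(InnerTask::fork);         // schedule the subtasks
            return inner.stream()
                    .map(InnerTask::join)           // join() helps out while waiting
                    .collect(Collectors.toSet());
        }
    }

    static class InnerTask extends RecursiveTask<String> {
        private final int k;

        InnerTask(int k) {
            this.k = k;
        }

        @Override
        protected String compute() {
            return "number " + k + " in inner loop";
        }
    }

    public static void main(String[] args) {
        ForkJoinPool pool = new ForkJoinPool(8);
        System.out.println(pool.invoke(new OuterTask()));
        pool.shutdown();
    }
}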

With CompletableFuture you need to handle a lot more of the actual planning/scheduling yourself, and it will be more complex to tune if you decide to change things. With FJP the only thing you need to tune is the number of threads in the pool; with CompletableFuture you also need to think about then vs. thenAsync.

Upvotes: 4

Shankha057

Reputation: 1371

The approach you are suggesting, which rests on the hypothesis that the problem can be resolved as long as there are more threads than tasks, will not work here if you are allocating a single thread pool. You may try it and see. It is a simple case of deadlock, as you have stated in the comments of your code.

In such a case, use two separate thread pools, one for the outer tasks and another for the inner tasks. When a task in the inner pool completes, simply return its value to the outer one.

Or you can simply create a thread on the fly, do the work in it, get the result, and return it to the outer task. A sketch of the two-pool approach follows below.
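
A minimal sketch of the two-pool variant, mirroring the snippet from the question (the pool names are only illustrative; imports and shutdown are omitted just as in the original):

ExecutorService outerPool = Executors.newFixedThreadPool(8);
ExecutorService innerPool = Executors.newFixedThreadPool(8);

Set<Future<Set<String>>> outerSet = new HashSet<>();
for (int i = 0; i < 8; i++) {
    outerSet.add(outerPool.submit(() -> {
        Thread.sleep(10000); // to simulate work
        Set<Future<String>> innerSet = new HashSet<>();
        for (int j = 0; j < 8; j++) {
            int k = j;
            innerSet.add(innerPool.submit(() -> "number " + k + " in inner loop"));
        }
        Set<String> out = new HashSet<>();
        for (Future<String> f : innerSet) {
            out.add(f.get()); // safe to block: the inner tasks run on their own pool
        }
        return out;
    }));
}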

Upvotes: 0

Izzy

Reputation: 238

I would recommend trying to decompose the work into completion stages via CompletableFuture:

CompletableFuture.supplyAsync(outerTask)
        .thenCompose(result -> CompletableFuture.allOf(innerTasks));

That way your outer task doesn’t hog the execution thread while processing inner tasks, but you still get a Future that resolves when the entire job is done. It can be hard to split those stages up if they’re too tightly coupled though.
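
A minimal sketch of that idea, reusing the inner work from the question on a shared 8-thread pool (the stage contents are only illustrative):

import java.util.List;
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class CompletionStageExample {
    public static void main(String[] args) throws Exception {
        ExecutorService es = Executors.newFixedThreadPool(8);

        CompletableFuture<Set<String>> job = CompletableFuture
                .supplyAsync(() -> "outer work done", es)      // outer stage
                .thenCompose(outerResult -> {
                    // Fan out the inner tasks; the outer stage's thread is released here.
                    List<CompletableFuture<String>> inner = IntStream.range(0, 8)
                            .mapToObj(k -> CompletableFuture.supplyAsync(
                                    () -> "number " + k + " in inner loop", es))
                            .collect(Collectors.toList());
                    // One future that completes when every inner task has completed.
                    return CompletableFuture.allOf(inner.toArray(new CompletableFuture[0]))
                            .thenApply(v -> inner.stream()
                                    .map(CompletableFuture::join) // all done, so join() won't block
                                    .collect(Collectors.toSet()));
                });

        System.out.println(job.get());
        es.shutdown();
    }
}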

Upvotes: 1
