pacoverflow

Reputation: 3881

Repeated timeouts with Java Future causes JVM to run out of memory

Our Java application has an issue where it blocks indefinitely when it tries to write to a log file on an NFS share while the share is down.

I was wondering whether we could solve this problem by submitting the write operation to an executor and waiting on the resulting Future with a timeout. Here is a little test program I wrote:

import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.apache.log4j.Category;
import org.apache.log4j.FileAppender;
import org.apache.log4j.Priority;
import org.apache.log4j.SimpleLayout;

public class write_with_future {
    public static void main(String[] args) {
        int iteration = 0;
        while (true) {
            System.out.println("iteration " + ++iteration);

            // A new single-thread executor is created for every iteration.
            ExecutorService executorService = Executors.newSingleThreadExecutor();
            Future<?> future = executorService.submit(new Runnable() {
                public void run() {
                    try {
                        // Write one log record to the NFS-backed file.
                        Category fileLogCategory = Category.getInstance("name");
                        FileAppender fileAppender = new FileAppender(new SimpleLayout(),
                                "/usr/local/app/log/write_with_future.log");
                        fileLogCategory.addAppender(fileAppender);
                        fileLogCategory.log(Priority.INFO, System.currentTimeMillis());
                        fileLogCategory.removeAppender(fileAppender);
                        fileAppender.close();
                    }
                    catch (IOException e) {
                        System.out.println("IOException: " + e);
                    }
                }
            });

            try {
                // Wait at most 100 ms for the write to complete.
                future.get(100L, TimeUnit.MILLISECONDS);
            }
            catch (InterruptedException ie) {
                System.out.println("Current thread interrupted while waiting for task to complete: " + ie);
            }
            catch (ExecutionException ee) {
                System.out.println("Exception from task: " + ee);
            }
            catch (TimeoutException te) {
                System.out.println("Task timed out: " + te);
            }
            finally {
                future.cancel(true);
            }

            executorService.shutdownNow();
        }
    }
}

When I ran this program with a maximum heap size of 1 MB and the NFS share up, it executed over 1 million iterations before I stopped it.

But when I ran the program with a maximum heap size of 1 MB and the NFS share down, it executed 584 iterations, getting a TimeoutException each time, and then failed with a java.lang.OutOfMemoryError. So I am thinking that even though future.cancel(true) and executorService.shutdownNow() are called, the executor threads are blocked on the write and not responding to the interrupts, and the program eventually runs out of memory.
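A quick way to check this theory (my own diagnostic, not part of the application) is to print the live thread count at the top of the while loop; if it climbs by roughly one per timed-out iteration, the blocked executor threads are never being reclaimed:

// Rough diagnostic: Thread.activeCount() is only an estimate, but a count
// that grows by one per timed-out iteration suggests the executor threads
// blocked on the NFS write are never dying.
System.out.println("live threads: " + Thread.activeCount());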

Is there any way to clean up the executor threads that are blocked?

Upvotes: 0

Views: 499

Answers (1)

Stephen C

Reputation: 718788

It appears that Thread.interrupt() does not interrupt threads blocked in an I/O operation on an NFS file. You might want to check the NFS mount options, but I suspect that you won't be able to fix that problem.
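For what it's worth, a "soft" mount is the usual knob here: it makes a blocked NFS operation fail with an I/O error after a bounded number of retries instead of hanging forever. The entry below is purely illustrative (the server name and paths are made up), and note that soft mounts can silently lose writes:

# Illustrative /etc/fstab entry (hostname and paths are placeholders).
# "soft" makes NFS operations fail with an error after "retrans" retries
# of "timeo" tenths of a second each, instead of blocking indefinitely.
nfsserver:/export/logs  /usr/local/app/log  nfs  soft,timeo=30,retrans=2  0  0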

However, you could certainly prevent it from causing OOMEs. The reason you are getting those is that you are not using ExecutorServices as they are designed to be used. What you are doing is repeatedly creating and shutting down single-thread services. What you should be doing is creating one instance with a bounded thread pool and using it for all of the tasks (see the sketch below). If you do it that way, then when one of the threads takes a long time ... or is blocked in I/O ... you won't get a build-up of threads and run out of memory. Instead, the backlogged tasks will sit in the ExecutorService's work queue until one of the worker threads unblocks.
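Here is a minimal sketch of that approach. The pool size, queue capacity, and saturation policy are illustrative choices, not prescriptions:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class write_with_shared_executor {
    public static void main(String[] args) {
        // One bounded pool for the whole run, instead of a new
        // single-thread executor per iteration.
        ExecutorService executorService = new ThreadPoolExecutor(
                1, 1,                                   // exactly one worker thread
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(100),  // bounded backlog of writes
                new ThreadPoolExecutor.DiscardOldestPolicy()); // shed load when full

        int iteration = 0;
        while (true) {
            System.out.println("iteration " + ++iteration);

            Future<?> future = executorService.submit(new Runnable() {
                public void run() {
                    // ... the same log-writing code as in the question ...
                }
            });

            try {
                future.get(100L, TimeUnit.MILLISECONDS);
            }
            catch (TimeoutException te) {
                // The worker may still be blocked on the NFS write, but no
                // new thread is created: at most one worker ever exists, and
                // pending writes queue up (or are discarded) behind it.
                System.out.println("Task timed out: " + te);
            }
            catch (InterruptedException | ExecutionException e) {
                System.out.println("Exception: " + e);
            }
        }
    }
}

With DiscardOldestPolicy the program sheds log writes rather than memory while the share is down; CallerRunsPolicy or a plain AbortPolicy would be equally valid, depending on whether losing log records is acceptable.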

Upvotes: 1
