Repeated Java Future timeouts leading to JVM out of memory

We have a problem with our Java application: when it tries to write to a log file located on an NFS share and the NFS share is down, it blocks indefinitely.

I was wondering whether we could work around this by performing the write inside a task and waiting on its Future with a timeout. Here is a test program I wrote:

import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.apache.log4j.Category;
import org.apache.log4j.FileAppender;
import org.apache.log4j.Priority;
import org.apache.log4j.SimpleLayout;

public class write_with_future {
    public static void main(String[] args) {
        int iteration = 0;
        while (true) {
            System.out.println("iteration " + ++iteration);

            // A fresh single-thread executor is created for every write.
            ExecutorService executorService = Executors.newSingleThreadExecutor();
            Future<?> future = executorService.submit(new Runnable() {
                public void run() {
                    try {
                        // Append one log entry to a file on the NFS share.
                        Category fileLogCategory = Category.getInstance("name");
                        FileAppender fileAppender = new FileAppender(new SimpleLayout(), "/usr/local/app/log/write_with_future.log");
                        fileLogCategory.addAppender(fileAppender);
                        fileLogCategory.log(Priority.INFO, System.currentTimeMillis());
                        fileLogCategory.removeAppender(fileAppender);
                        fileAppender.close();
                    }
                    catch (IOException e) {
                        System.out.println("IOException: " + e);
                    }
                }
            });

            try {
                // Wait at most 100 ms for the write to complete.
                future.get(100L, TimeUnit.MILLISECONDS);
            }
            catch (InterruptedException ie) {
                System.out.println("Current thread interrupted while waiting for task to complete: " + ie);
            }
            catch (ExecutionException ee) {
                System.out.println("Exception from task: " + ee);
            }
            catch (TimeoutException te) {
                System.out.println("Task timed out: " + te);
            }
            finally {
                future.cancel(true);
            }

            executorService.shutdownNow();
        }
    }
}

When I ran the program with a maximum heap size of 1 MB and the NFS share was up, the program was able to execute more than a million iterations before I stopped it.

But when I ran the program with a maximum heap size of 1 MB and the NFS share was down, the program executed 584 iterations, getting a TimeoutException each time, and then failed with a java.lang.OutOfMemoryError. So I am thinking that even though future.cancel(true) and executorService.shutdownNow() are being called, the executor threads are blocked on the write and not responding to the interrupts, and the program eventually runs out of memory.
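One way to check that suspicion (this diagnostic is my addition, not part of the original test program) is to print the JVM's live thread count on each iteration; if it climbs by one for every TimeoutException, the cancelled workers are indeed still parked in the NFS write. A minimal sketch using the standard ThreadMXBean:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCountProbe {
    public static void main(String[] args) {
        // Report how many threads are currently live and the peak so far.
        // Invoked once per iteration, a steadily rising count would confirm
        // that cancelled workers are still blocked in the NFS write.
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.println("live threads: " + threads.getThreadCount()
                + ", peak: " + threads.getPeakThreadCount());
    }
}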

Is there any way to clean up the blocked executor threads?

Answer

    It appears that Thread.interrupt() does not interrupt threads blocked in an I/O operation on an NFS file. You might want to check the NFS mount options, but I suspect that you won't be able to fix that problem.

    However, you could certainly prevent it from causing OOMEs. The reason you are getting those is that you are not using ExecutorService as it is designed to be used. What you are doing is repeatedly creating and shutting down single-thread services. What you should be doing is creating one instance with a bounded thread pool and using that for all of the tasks. If you do it that way, then if one of the threads takes a long time, or is blocked in I/O, you won't get a build-up of threads and run out of memory. Instead, the backlogged tasks will sit in the ExecutorService's work queue until one of the worker threads unblocks.
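    A minimal sketch of that approach (the class name, the pool size of 2, the 100 ms timeout, and the empty task body are illustrative assumptions, not from the original post): one fixed-size pool is created once and reused for every write, so a blocked write can tie up at most a fixed number of threads while later tasks wait in the pool's queue.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    public class BoundedPoolWriter {
        // One shared pool for all write tasks; 2 threads is an illustrative bound.
        private static final ExecutorService WRITER_POOL = Executors.newFixedThreadPool(2);

        public static void main(String[] args) throws Exception {
            for (int iteration = 1; iteration <= 1_000_000; iteration++) {
                System.out.println("iteration " + iteration);

                // Submit the (possibly blocking) write to the shared pool.
                Future<?> future = WRITER_POOL.submit(() -> {
                    // ... perform the log write here, as in the original program ...
                });

                try {
                    future.get(100L, TimeUnit.MILLISECONDS);
                }
                catch (TimeoutException te) {
                    System.out.println("Task timed out: " + te);
                    // Cancellation may still not unblock a thread stuck in NFS I/O,
                    // but the pool never grows beyond its fixed size, so no OOME.
                    future.cancel(true);
                }
            }
            WRITER_POOL.shutdown();
        }
    }

    With this structure the cost of a hung NFS write is bounded: at most two worker threads and the queued Runnables, rather than one new blocked thread (and its stack) per iteration.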