Reputation: 24184
I am working on a project that has several different bundles. For example, suppose I have 5 bundles, and each of those bundles has a method named process.
Currently, I am calling the process method of all those 5 bundles in parallel using the multithreaded code below.
But every time I run the multithreaded code below, it gives me an OutOfMemoryError. If I run it sequentially instead, calling the process method one bundle at a time, I don't get any OutOfMemoryError.
Below is the code-
public void callBundles(final Map<String, Object> eventData) {
    // Three threads: one thread for the database writer, two threads for the plugin processors
    final ExecutorService executor = Executors.newFixedThreadPool(3);
    final Map<String, String> outputs = (Map<String, String>) eventData.get(Constants.EVENT_HOLDER);

    for (final BundleRegistration.BundlesHolderEntry entry : BundleRegistration.getInstance()) {
        executor.submit(new Runnable() {
            public void run() {
                try {
                    final Map<String, String> response = entry.getPlugin().process(outputs);
                    // process the response and update the database.
                    System.out.println(response);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
    }
}
Below is the exception I get whenever I run the above multithreaded code.
JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait.
JVMDUMP032I JVM requested Heap dump using 'S:\GitViews\Stream\goldseye\heapdump.20130904.175256.12608.0001.phd' in response to an event
JVMDUMP010I Heap dump written to S:\GitViews\Stream\goldseye\heapdump.20130904.175256.12608.0001.phd
JVMDUMP032I JVM requested Java dump using 'S:\GitViews\Stream\goldseye\javacore.20130904.175256.12608.0002.txt' in response to an event
UTE430: can't allocate buffer
UTE437: Unable to load formatStrings for j9mm
JVMDUMP010I Java dump written to S:\GitViews\Stream\goldseye\javacore.20130904.175256.12608.0002.txt
JVMDUMP032I JVM requested Snap dump using 'S:\GitViews\Stream\goldseye\Snap.20130904.175256.12608.0003.trc' in response to an event
UTE001: Error starting trace thread for "Snap Dump Thread": -1
JVMDUMP010I Snap dump written to S:\GitViews\Stream\goldseye\Snap.20130904.175256.12608.0003.trc
JVMDUMP013I Processed dump event "systhrow", detail "java/lang/OutOfMemoryError".
ERROR: Bundle BullseyeModellingFramework [1] EventDispatcher: Error during dispatch. (java.lang.OutOfMemoryError: Failed to create a thread: retVal -1073741830, errno 12)
java.lang.OutOfMemoryError: Failed to create a thread: retVal -1073741830, errno 12
JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait.
JVMDUMP032I JVM requested Heap dump using 'S:\GitViews\Stream\goldseye\heapdump.20130904.175302.12608.0004.phd' in response to an event
JVMDUMP010I Heap dump written to S:\GitViews\Stream\goldseye\heapdump.20130904.175302.12608.0004.phd
JVMDUMP032I JVM requested Java dump using 'S:\GitViews\Stream\goldseye\javacore.20130904.175302.12608.0005.txt' in response to an event
I am using JDK 1.6.0_26 as the installed JRE in my Eclipse.
Upvotes: 4
Views: 3929
Reputation: 20614
Each call of callBundles() creates a new thread pool, because it creates its own executor. Each thread has its own stack space! So, say you start the JVM: the first call will create three threads with a total of 3 MB of stack space (1024 KB is the default thread stack size on a 64-bit JVM), the next call another 3 MB, and so on. 1000 calls/s would need 3 GB/s!
The second problem is that you never shutdown() the created executor services, so the threads live on until the garbage collector removes the executor (finalize() also calls shutdown()). But the GC will never reclaim the stack memory, so if the stack memory is the problem and the heap is not full, the GC will never help!
You need to use one ExecutorService, let's say with 10 to 30 threads, or a custom ThreadPoolExecutor with 3-30 cached threads and a LinkedBlockingQueue. Call shutdown() on the service before your application stops, if possible.
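A minimal sketch of that idea, assuming a single application-wide holder class (the class name BundleExecutorHolder and the pool size of 10 are illustrative, not tuned values):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class BundleExecutorHolder {

    // One shared pool for every callBundles() invocation instead of a new pool per call.
    private static final ExecutorService EXECUTOR = Executors.newFixedThreadPool(10);

    private BundleExecutorHolder() { }

    public static ExecutorService get() {
        return EXECUTOR;
    }

    // Call this once when the application stops so the worker threads can terminate.
    public static void shutdown() {
        EXECUTOR.shutdown();
    }
}

callBundles() would then submit its tasks to BundleExecutorHolder.get() instead of creating its own executor.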
Check the physical RAM, load and response time of your application to tune the parameters: heap size, maximum number of threads and keep-alive time of the threads in the pool. Have a look at other limiting parts of the code (size of a database connection pool, ...) and the number of CPUs/cores of your server. A starting point for a thread pool size may be the number of CPUs/cores plus 1; with a lot of I/O wait, more threads become useful.
Upvotes: 2
Reputation: 22292
The main problem is that you aren't really using thread pooling properly. If all of your "process" threads have equal priority, there's no good reason not to make one large thread pool and submit all of your Runnable tasks to it. Note - "large" in this case is determined via experimentation and profiling: adjust it until your performance, in terms of speed and memory, is what you expect.
Here is an example of what I'm describing:
// Using 10000 purely as a concrete example - you should define the correct number
public static final int LARGE_NUMBER_OF_THREADS = 10000;

// Elsewhere in code, you defined a static thread pool
public static final ExecutorService EXECUTOR =
        Executors.newFixedThreadPool(LARGE_NUMBER_OF_THREADS);

public void callBundles(final Map<String, Object> eventData) {
    final Map<String, String> outputs =
            (Map<String, String>) eventData.get(Constants.EVENT_HOLDER);

    for (final BundleRegistration.BundlesHolderEntry entry : BundleRegistration.getInstance()) {
        // "Three threads: one thread for the database writer,
        // two threads for the plugin processors"
        // so you'll need to repeat this future = EXECUTOR.submit() pattern two more times
        Future<?> processFuture = EXECUTOR.submit(new Runnable() {
            public void run() {
                final Map<String, String> response =
                        entry.getPlugin().process(outputs);
                // process the response and update the database.
                System.out.println(response);
            }
        });

        // Note, I'm catching the exception out here instead of inside the task.
        // This also allows me to force order on the three component threads.
        try {
            processFuture.get();
        } catch (Exception e) {
            System.err.println("Should really do something more useful");
            e.printStackTrace();
        }

        // If you wanted to ensure that the three component tasks run in order,
        // you could call future = EXECUTOR.submit(); future.get();
        // for each one of them
    }
}
For completeness, you could also use a cached thread pool to avoid repeated creation of short-lived Threads. However, if you're already worried about memory consumption, a fixed pool might be better.
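As a sketch of that cached-pool variant (the holder class name is an illustrative assumption, not part of the original code):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class CachedExecutorHolder {

    // A cached pool reuses idle worker threads and discards threads that have been
    // idle for 60 seconds, but it puts no upper bound on how many threads it creates.
    public static final ExecutorService EXECUTOR = Executors.newCachedThreadPool();

    private CachedExecutorHolder() { }
}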
When you get to Java 7, you might find that Fork-Join is a better pattern than a series of Futures. Whatever fits your needs best, though.
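If you do go the Fork-Join route, here is a hedged sketch of what that could look like on Java 7 (BundleTask and the Runnable body are hypothetical stand-ins for the real plugin call):

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

// Hypothetical task wrapping a single plugin's process() call.
class BundleTask extends RecursiveAction {
    private final Runnable work;

    BundleTask(Runnable work) {
        this.work = work;
    }

    @Override
    protected void compute() {
        // A single plugin call is small enough that we don't fork it further.
        work.run();
    }
}

class ForkJoinExample {
    public static void main(String[] args) {
        // Defaults to a parallelism level equal to the number of available cores.
        ForkJoinPool pool = new ForkJoinPool();
        pool.invoke(new BundleTask(new Runnable() {
            public void run() {
                System.out.println("processing one bundle");
            }
        }));
    }
}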
Upvotes: 1