Scientist

Reputation: 1464

Optimization of ThreadPoolExecutor (Java)

I am replacing our legacy use of raw Thread objects with a ThreadPoolExecutor.

I have created executor as below:

pool = new ThreadPoolExecutor(coreSize, size, 0L, TimeUnit.MILLISECONDS,
       new LinkedBlockingQueue<Runnable>(coreSize),
       new CustomThreadFactory(name),
       new CustomRejectionExecutionHandler());
pool.prestartAllCoreThreads();

Here coreSize is maxPoolSize/5. I pre-start all the core threads on application start-up, roughly 160 threads.
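For reference, a self-contained sketch of the setup above. The sizes are hypothetical (chosen so that the core size works out to 160, as in the question), and the JDK's default thread factory and rejection policy stand in for the CustomThreadFactory and CustomRejectionExecutionHandler classes:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSetup {
    // Hypothetical sizing mirroring the question's ratio: coreSize = maxSize / 5.
    static ThreadPoolExecutor buildPool(int maxSize) {
        int coreSize = maxSize / 5;
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                coreSize, maxSize,
                0L, TimeUnit.MILLISECONDS,
                // As in the question, the queue capacity is tied to coreSize;
                // whether that is a good idea is a separate tuning question.
                new LinkedBlockingQueue<Runnable>(coreSize));
        pool.prestartAllCoreThreads();
        return pool;
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = buildPool(800);
        System.out.println("core threads started: " + pool.getPoolSize());
        pool.shutdown();
    }
}
```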

In legacy design we were creating and starting around 670 threads.

But the point is that even after replacing the legacy design with the executor, we are not seeing much better results.

To measure memory usage we use the top command; for timing we log System.currentTimeMillis() around the work being measured.

Please tell how to optimize this design. Thanks.

Upvotes: 2

Views: 2434

Answers (2)

Brian Agnew

Reputation: 272427

The executor merely wraps the creation/usage of Threads, so it's not doing anything magical.

It sounds like you have a bottleneck elsewhere. Are you locking on a single object? Do you have a single single-threaded resource that every thread hits? In that case you wouldn't see any change in behaviour.

Is your process CPU-bound? If so, your thread count should (very roughly speaking) match the number of processing cores available. Note that each thread you create consumes memory for its stack, so if you're memory-bound, creating more threads won't help.
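The rule of thumb above can be sketched as follows, assuming a purely CPU-bound workload; availableProcessors() and newFixedThreadPool() are standard JDK calls:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CpuBoundPool {
    // For CPU-bound work, more threads than cores mostly adds
    // context-switch overhead without adding throughput.
    static int suggestedThreads() {
        return Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(suggestedThreads());
        System.out.println("pool sized to " + suggestedThreads() + " threads");
        pool.shutdown();
    }
}
```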

Upvotes: 0

Gray

Reputation: 116938

But the point is even after using Executor and creating and replacing legacy design we are not getting much better results.

I am assuming that you are looking at the overall throughput from your application and you are not seeing a better performance as opposed to running each task in its own thread -- i.e. not with a pool?

This suggests that you were not being slowed down by context switching in the first place. Maybe your application is IO-bound or otherwise waiting on some other system resource. 670 threads sounds like a lot, and they would have consumed a lot of thread-stack memory, but the thread count itself may not have been holding back the performance of your application.

Typically we use the ExecutorService classes not necessarily because they are faster than raw threads but because the code is easier to manage. The concurrent classes take a lot of the locking, queueing, etc. out of your hands.

Couple code comments:

  • I'm not sure you want the LinkedBlockingQueue to be limited to core-size. Those are two different numbers: core-size is the minimum number of threads in the pool, while the capacity of the BlockingQueue is how many jobs can queue up waiting for a free thread.

  • As an aside, the ThreadPoolExecutor will never allocate a thread beyond the core-thread number unless the BlockingQueue is full. In your case, only when all of the core threads are busy and the queue already holds core-size queued tasks will the next thread be forked.

  • I've never had to use pool.prestartAllCoreThreads(). The core threads are started as tasks are submitted to the pool, so I don't think it buys you much -- at least not in a long-running application.
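The queue-full behaviour described above can be observed directly. This is a minimal sketch with deliberately tiny, hypothetical sizes (core = 1, max = 3, queue capacity = 2); the pool only forks a second thread once the queue has no room left:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueFillDemo {
    // Returns the pool size while the queue still has room, then after it fills.
    static int[] poolSizes() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 3, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(2));
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };
        pool.execute(blocker);           // taken by the single core thread
        pool.execute(blocker);           // queued (1 of 2)
        pool.execute(blocker);           // queued (2 of 2) -- queue now full
        int beforeFull = pool.getPoolSize();
        pool.execute(blocker);           // queue full: a non-core thread is forked
        int afterFull = pool.getPoolSize();
        release.countDown();
        pool.shutdown();
        return new int[] { beforeFull, afterFull };
    }

    public static void main(String[] args) {
        int[] sizes = poolSizes();
        System.out.println(sizes[0] + " -> " + sizes[1]);
    }
}
```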

For time we have placed loggers of System.currentTime in millis to check the usage.

Be careful with this. Too much logging could affect the performance of your application more than re-architecting it. But I assume you added the loggers after you didn't see a performance improvement.
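On the measurement side, System.nanoTime() is generally preferable to System.currentTimeMillis() for elapsed-time measurements, since the wall clock can jump when it is adjusted while nanoTime() is monotonic. A small sketch, where timeMillis is a hypothetical helper:

```java
import java.util.concurrent.TimeUnit;

public class Timing {
    // Hypothetical helper: measure the elapsed wall time of a task in
    // milliseconds using the monotonic nanoTime clock.
    static long timeMillis(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
    }

    public static void main(String[] args) {
        long elapsed = timeMillis(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
        });
        System.out.println("slept for ~" + elapsed + " ms");
    }
}
```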

Upvotes: 3
