Scientist

Reputation: 1464

Thread Pool Executor

I am working on replacing a legacy design with a ThreadPoolExecutor. The details are as follows:

Legacy: In the legacy design, 600 threads are created at application start-up and placed in various pools. A thread is then picked from its pool when required and the task is assigned to it.

New: In the new design I replaced the thread pools with an executor service:

 ThreadPoolExecutor thpool = new ThreadPoolExecutor(coreSize,poolsize,...);

What I am observing is that with the executor no threads are created at start-up; they are created when a request is fired from the client. As a result, far fewer threads exist in memory than with the previous design.
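For example (a minimal sketch; the sizes and keep-alive values here are hypothetical, not my real configuration), the lazy creation can be observed via getPoolSize():

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class LazyCreationDemo {
    public static void main(String[] args) {
        // Hypothetical sizes, for illustration only.
        ThreadPoolExecutor thpool = new ThreadPoolExecutor(
                10, 50, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());

        System.out.println("Pool size after construction: " + thpool.getPoolSize()); // 0

        thpool.submit(new Runnable() {
            public void run() { /* simulated client request */ }
        });

        System.out.println("Pool size after first submit: " + thpool.getPoolSize()); // 1

        thpool.shutdown();
    }
}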

But my question is: is this the right way? Thread creation is also an overhead, and it now happens at the moment the call is triggered.

Please tell me which is heavier: creating threads at the time of the client call, or keeping idle threads in memory as in the legacy approach.

Also, please suggest which executor pool to use to get the best performance.

Upvotes: 0

Views: 1313

Answers (3)

Ralf H

Reputation: 1474

600 sounds like a lot. You may want to lower this somewhat, toward the number of available processors. Or, if the threads end up waiting a lot (i.e. they are not 100% CPU-bound), factor in the average load incurred by the TPE's Runnables. Say you have nCPUs = Runtime.getRuntime().availableProcessors() and loadFactor as the average CPU load of your threads (as observed in testing or, better yet, continuous monitoring). Then you would use nThreads = nCPUs / loadFactor and hope that loadFactor is not zero.
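As a sketch (the loadFactor value here is made up; in reality you would measure it, and the clamp to at least one thread is just a guard):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizing {
    public static void main(String[] args) {
        int nCPUs = Runtime.getRuntime().availableProcessors();

        // Average fraction of time one of your worker threads is actually
        // on the CPU -- measured in testing/monitoring; 0.25 is a placeholder.
        double loadFactor = 0.25;

        // nThreads = nCPUs / loadFactor, clamped so it never drops to zero.
        int nThreads = (int) Math.max(1, Math.round(nCPUs / loadFactor));

        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        System.out.println("nCPUs=" + nCPUs + ", nThreads=" + nThreads);
        pool.shutdown();
    }
}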

You can also use a smaller coresize and a larger poolsize, but then you need a bounded queue. In this case the TPE starts new threads when the queue is full, until it reaches poolsize. If the majority of your jobs are handled within coresize, thread creation should not be too frequent and its overhead should not be a concern. But even this may block or reject tasks eventually, once the TPE has its maximum number of threads running.
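Such a configuration could look like this (all sizes are placeholders; the CallerRunsPolicy is just one way to deal with saturation, chosen here for illustration):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedQueuePool {
    public static ThreadPoolExecutor create() {
        // Threads above the core size are only created once the bounded
        // queue is full; non-core threads die after 60s of idleness.
        // CallerRunsPolicy makes the submitting thread run the task itself
        // when both the queue and the maximum pool are saturated, instead
        // of throwing RejectedExecutionException.
        return new ThreadPoolExecutor(
                8,                                       // coresize (placeholder)
                64,                                      // poolsize / maximum (placeholder)
                60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(1000),  // bounded queue (placeholder capacity)
                new ThreadPoolExecutor.CallerRunsPolicy());
    }
}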

If your jobs are some kind of incoming work, like reads from a socket or other connection that must not block, you can create an unbounded intermediate queue to keep inbound processing from eventually blocking at your TPE, and use another thread to submit jobs from the intermediate queue to the TPE queue.
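Roughly like this (a sketch only: class and field names are mine, error handling is minimal, and the pool sizes are placeholders):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class InboundFeeder {
    // Unbounded hand-off queue: the inbound (socket-reading) thread only
    // ever touches this, so it is never held up by the TPE.
    private final BlockingQueue<Runnable> intermediate = new LinkedBlockingQueue<Runnable>();

    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            8, 64, 60L, TimeUnit.SECONDS,
            new ArrayBlockingQueue<Runnable>(1000),      // bounded work queue
            new ThreadPoolExecutor.CallerRunsPolicy());  // forwarder runs the task when saturated

    // Called by the inbound thread; an unbounded LinkedBlockingQueue never rejects.
    public void submit(Runnable job) {
        intermediate.add(job);
    }

    public void start() {
        Thread forwarder = new Thread(new Runnable() {
            public void run() {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        // If the pool and its queue are full, CallerRunsPolicy
                        // makes this forwarder thread execute the task itself,
                        // throttling the hand-off without blocking inbound work.
                        pool.execute(intermediate.take());
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        forwarder.setDaemon(true);
        forwarder.start();
    }
}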

Upvotes: 0

Salah

Reputation: 8657

You can call:

thpool.prestartAllCoreThreads();

or

thpool.prestartCoreThread();

These two methods start a core thread (or all core threads), causing them to idly wait for work.
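For example (a sketch that mirrors the 600 threads from the question; the queue and keep-alive values are arbitrary):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PrestartDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor thpool = new ThreadPoolExecutor(
                600, 600, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());

        // Starts all core threads up front; returns how many were started.
        int started = thpool.prestartAllCoreThreads();
        System.out.println("Core threads started: " + started); // 600

        thpool.shutdown();
    }
}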

But I recommend that you do not do this; it will be an overhead on your resources.

Upvotes: 0

Aubin

Reputation: 14853

To fix 600 threads at start-up, try using java.util.concurrent.Executors.newFixedThreadPool( 600 );

Creates a thread pool that reuses a fixed number of threads operating off a shared unbounded queue. At any point, at most nThreads threads will be active processing tasks. If additional tasks are submitted when all threads are active, they will wait in the queue until a thread is available. If any thread terminates due to a failure during execution prior to shutdown, a new one will take its place if needed to execute subsequent tasks. The threads in the pool will exist until it is explicitly shutdown.

As you can read, the documentation doesn't tell us if the threads are started immediately or on demand.

If you absolutely want the 600 threads started at start-up, you may post 600 empty tasks:

for( int i = 0; i < 600; ++i ) {
   executor.submit( new Runnable(){public void run(){/**/}});
}

Upvotes: 1
