Reputation: 14452
I'm working on a service that will process a large number of requests, each in a different file. The challenge is that processing requires both local (CPU) work and remote (database) work. The database has huge capacity. Database work is 30-80% of the processing time (dynamic; it cannot be calculated up front).
The default commonPool used by completionService.completeAsync has (processors - 1) threads. Given that a large portion of the processing is spent waiting for database work, the default commonPool underutilizes the local machine's resources.
I believe a custom executor that conditionally pauses when the local load on the machine is high could improve the situation. I'm not sure how to build such an executor. Any advice? Does any existing library provide such code?
For readers familiar with GNU make - the equivalent of make's ability (the `-l` / `--load-average` option) to limit concurrent processing based on load.
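One way to sketch what the question describes: subclass `ThreadPoolExecutor` and override `beforeExecute` to poll the system load average, pausing the worker thread while the machine is saturated. This is a minimal illustration, not a tuned implementation; the class name, the load threshold, and the 100 ms poll interval are all arbitrary choices for the example.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

/**
 * Sketch of a load-aware executor: before each task runs, the worker
 * checks the 1-minute system load average and backs off while it is
 * above a configured threshold (similar in spirit to make's -l option).
 */
public class LoadAwareExecutor extends ThreadPoolExecutor {
    private static final OperatingSystemMXBean OS =
            ManagementFactory.getOperatingSystemMXBean();
    private final double maxLoad;

    public LoadAwareExecutor(int threads, double maxLoad) {
        super(threads, threads, 0L, TimeUnit.MILLISECONDS,
              new LinkedBlockingQueue<>());
        this.maxLoad = maxLoad;
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        // getSystemLoadAverage() returns a negative value on platforms
        // where it is unavailable; in that case we never pause.
        double load = OS.getSystemLoadAverage();
        while (load >= 0 && load > maxLoad) {
            try {
                Thread.sleep(100); // back off until load drops
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
            load = OS.getSystemLoadAverage();
        }
        super.beforeExecute(t, r);
    }

    public static void main(String[] args) throws Exception {
        // Oversubscribe the pool (threads mostly wait on the database),
        // but pause new tasks when load climbs well past the core count.
        int cores = Runtime.getRuntime().availableProcessors();
        LoadAwareExecutor pool = new LoadAwareExecutor(cores * 4, cores * 2.0);
        Future<Integer> f = pool.submit(() -> 21 + 21);
        System.out.println(f.get());
        pool.shutdown();
    }
}
```

Because `beforeExecute` runs on the worker thread itself, pausing there effectively throttles task start-up without rejecting queued work.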
Upvotes: 1
Views: 71
Reputation: 4965
I'm not sure if I understand your question correctly, but I'll take a shot at answering anyway. What I understand is: you run a large number of requests on a fixed-size thread pool, and you find your CPU is underutilized because these threads are often blocked waiting for a response from the database.
So generally speaking, I think what you want is to prevent your worker threads from being blocked by I/O. Instead of making your thread pool bigger to compensate for blocking I/O, you should use a non-blocking database driver and eliminate blocking I/O altogether.
Different approaches exist for different databases. Some support async I/O natively, some provide the illusion by maintaining a separate thread pool for DB I/O. Some integrate with higher-level abstractions such as Reactive Streams.
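The "separate thread pool for DB I/O" approach mentioned above can be sketched with `CompletableFuture`: blocking database calls run on a large dedicated I/O pool, while CPU-bound work stays on a small pool sized to the core count. The `dbQuery` and `process` methods below are placeholders standing in for a real driver call and real local processing; the pool sizes are illustrative.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Sketch of splitting blocking DB I/O from CPU-bound work onto
 * separate pools, so waiting on the database never starves the CPU pool.
 */
public class SplitPools {
    // Large pool: its threads spend most of their time blocked on I/O.
    static final ExecutorService ioPool = Executors.newFixedThreadPool(64);
    // Small pool: sized to the cores, for CPU-bound processing only.
    static final ExecutorService cpuPool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    // Placeholder for a blocking database call.
    static String dbQuery(String id) {
        try {
            Thread.sleep(50); // simulate DB latency
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "row-" + id;
    }

    // Placeholder for local CPU-bound processing.
    static String process(String row) {
        return row.toUpperCase();
    }

    public static CompletableFuture<String> handle(String id) {
        return CompletableFuture
                .supplyAsync(() -> dbQuery(id), ioPool)          // block only the I/O pool
                .thenApplyAsync(SplitPools::process, cpuPool);   // CPU work on its own pool
    }

    public static void main(String[] args) throws Exception {
        System.out.println(handle("42").get()); // ROW-42
        ioPool.shutdown();
        cpuPool.shutdown();
    }
}
```

With a genuinely asynchronous driver you would drop the I/O pool entirely and compose on the futures the driver returns; the structure of the pipeline stays the same.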
For Redis, for example, there is an alternative Java driver called Lettuce that provides an asynchronous API and a reactive API. (Disclaimer: I have not used Lettuce myself.)
Upvotes: 1