Reputation: 922
Currently, I am using an executor-backed channel as the output channel of a splitter that splits each message into 2 different messages to be processed in parallel.
I have set up that output channel as seen below:
<int:channel id="splitter_output">
<int:dispatcher task-executor="executor"/>
</int:channel>
<task:executor id="executor" pool-size="4"/>
I have been unable to fully understand how this pool-size works when multiple requests are sent simultaneously. If I send 1 request into my application through this flow, it results in 2 different messages on the "splitter_output" channel. If I send 3 requests, it results in 6 different messages on the "splitter_output" channel, since each request flows into a splitter that splits it into two separate messages.
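The splitter itself is not shown in my configuration above; for illustration only, it looks roughly like the following (the class, method, and channel names here are assumptions, not my real code):

import java.util.Arrays;
import java.util.List;
import org.springframework.integration.annotation.Splitter;

public class RequestSplitter {

    // Each incoming request becomes two messages; both are dispatched onto the
    // "splitter_output" executor channel and processed in parallel downstream.
    @Splitter(inputChannel = "requests", outputChannel = "splitter_output")
    public List<String> split(String request) {
        return Arrays.asList(request + "-part1", request + "-part2");
    }
}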
Is this pool-size set per request, where each request would spawn two executor threads to continue down the flow?
Or is it application-wide, where the first two requests would result in 4 threads being created and run through the flow, and then, once one of those requests finished, the third request would create 2 threads and continue down the flow?
Upvotes: 0
Views: 474
Reputation: 121262
First of all, this is not a Spring Integration feature, but rather a Spring Framework Core one.
Here is the documentation on the matter: https://docs.spring.io/spring/docs/5.2.3.RELEASE/spring-framework-reference/integration.html#scheduling
Regarding that pool-size, see its description:
The size of the executor's thread pool as either a single value or a range
(e.g. 5-10). If no bounded queue-capacity value is provided, then a max value
has no effect unless the range is specified as 0-n. In that case, the core pool
will have a size of n, but the 'allowCoreThreadTimeout' flag will be set to true.
If a queue-capacity is provided, then the lower bound of a range will map to the
core size and the upper bound will map to the max size. If this attribute is not
provided, the default core size will be 1, and the default max size will be
Integer.MAX_VALUE (i.e. unbounded).
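To make that mapping concrete, here is a rough sketch (the 5-10 range and the queue capacity of 25 are illustrative values, not the ones from your question) of what such a configuration sets up on the underlying executor:

import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

public class RangeMappingSketch {

    // Roughly what <task:executor pool-size="5-10" queue-capacity="25"/> configures:
    // the lower bound of the range becomes the core size, the upper bound the max size,
    // and queue-capacity bounds the internal work queue.
    static ThreadPoolTaskExecutor rangeExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);
        executor.setMaxPoolSize(10);
        executor.setQueueCapacity(25);
        executor.initialize();
        return executor;
    }
}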
That <task:executor> is backed by the ThreadPoolTaskExecutor. Here is its JavaDoc:
* JavaBean that allows for configuring a {@link java.util.concurrent.ThreadPoolExecutor}
* in bean style (through its "corePoolSize", "maxPoolSize", "keepAliveSeconds", "queueCapacity"
* properties) and exposing it as a Spring {@link org.springframework.core.task.TaskExecutor}.
* This class is also well suited for management and monitoring (e.g. through JMX),
* providing several useful attributes: "corePoolSize", "maxPoolSize", "keepAliveSeconds"
* (all supporting updates at runtime); "poolSize", "activeCount" (for introspection only).
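In that bean style, your <task:executor id="executor" pool-size="4"/> corresponds roughly to the following (assuming, per the attribute description above, that a single pool-size value sets both the core and max size and leaves the work queue unbounded):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
public class ExecutorConfig {

    @Bean
    public ThreadPoolTaskExecutor executor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        // core == max == 4; with the default unbounded queue, extra tasks
        // wait in the queue instead of triggering additional threads.
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(4);
        return executor;
    }
}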
I'm not sure what made you think that the thread behavior is "per-request". In fact, it is a global singleton bean configured with 4 threads, and everyone who wants to perform a task on this executor has to share those threads with everyone else. So, regardless of how many requests you send in, only 4 threads are going to do the work here; everything else waits in the internal queue.
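You can observe this with a small standalone sketch (assuming spring-context is on the classpath; the task bodies are just placeholders): submit 6 tasks, e.g. 3 requests split into 2 messages each, to a 4-thread executor and only 4 distinct thread names ever show up, while the remaining tasks wait in the queue.

import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

public class PoolSizeDemo {

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(4);
        executor.initialize();

        // 6 tasks, but only 4 threads: two tasks sit in the queue until a thread frees up.
        for (int i = 1; i <= 6; i++) {
            final int task = i;
            executor.execute(() -> {
                System.out.println("Task " + task + " on " + Thread.currentThread().getName());
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        Thread.sleep(3000);
        executor.shutdown();
    }
}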
Upvotes: 1