Reputation: 1964
I have a batch job with the following definition:
<batch:job id="job">
    <batch:split id="main" task-executor="simpleAsyncTaskExecutor">
        <batch:flow>
            <batch:step id="getAccountDetails">
                <batch:tasklet ref="getAccountDetailsTasklet"/>
            </batch:step>
        </batch:flow>
        <batch:flow>
            <batch:step id="processAccounts">
                <batch:tasklet transaction-manager="transactionManager" task-executor="threadPoolTaskExecutor" throttle-limit="${processor.maxThreads}">
                    <batch:chunk reader="queueReader" writer="myCustomItemWriter" commit-interval="${processor.commitInterval}"/>
                </batch:tasklet>
            </batch:step>
        </batch:flow>
    </batch:split>
</batch:job>
myCustomItemWriter iterates over the list of accounts passed along by queueReader and commits them to the database.
The job is scaled to run 100 threads of that chunk in parallel. In myCustomItemWriter's class, I have a private field that maintains a running sum of a specific BigDecimal property of every account it processes. So with 10,000 accounts, I'll have 100 threads, each processing 100 accounts, and I want to capture the sum of this property across all 10,000 accounts.
Here's my question: is the ItemWriter a singleton (in which case a single private field is enough to maintain this sum)? If not, should I define my counter as an AtomicReference bean and inject it into my writer, so that the same instance of the counter is shared by all 100 threads?
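For the shared-counter approach, a minimal sketch of a thread-safe BigDecimal accumulator (class and method names here are illustrative, not from the question) could use `AtomicReference.accumulateAndGet`, since BigDecimal itself is immutable:

```java
import java.math.BigDecimal;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical accumulator bean: one instance would be injected into
// every writer so that all 100 threads update the same running total.
class TotalAccumulator {
    private final AtomicReference<BigDecimal> total =
            new AtomicReference<>(BigDecimal.ZERO);

    // Lock-free, thread-safe addition: retries the CAS until it wins.
    void add(BigDecimal amount) {
        total.accumulateAndGet(amount, BigDecimal::add);
    }

    BigDecimal get() {
        return total.get();
    }
}
```

Declared as a singleton bean and injected into the writer, this sums correctly regardless of how many writer instances or threads exist.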
Upvotes: 4
Views: 2122
Reputation: 4158
If you annotate your writer with @Component, the default scope is singleton.
However, batch artifacts are instantiated prior to their use in the scope in which they are declared in the job XML, and they are valid for the life of their containing scope. Two scopes pertain to the artifact lifecycle: job and step.
In your case you can annotate your CustomItemWriter with @Scope("step"); since you are running a multi-threaded batch, each thread will create its own instance of myCustomItemWriter, which lives only for the currently executing step.
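A sketch of such a step-scoped writer, assuming a hypothetical Account item type with a getAmount() accessor (neither is given in the question):

```java
import java.math.BigDecimal;
import java.util.List;

import org.springframework.batch.item.ItemWriter;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;

// Illustrative sketch only: with @Scope("step") the bean is created lazily
// for the step execution rather than once for the whole application context,
// so the field below holds state scoped to that step, not to the context.
@Component
@Scope("step")
public class MyCustomItemWriter implements ItemWriter<Account> {

    private BigDecimal runningTotal = BigDecimal.ZERO; // per-instance state

    @Override
    public void write(List<? extends Account> accounts) {
        for (Account account : accounts) {
            runningTotal = runningTotal.add(account.getAmount());
            // persist the account to the database here ...
        }
    }
}
```

Note that the trade-off between step scope and a shared singleton counter depends on where the final sum must be visible: state kept in a step-scoped writer disappears with the step unless it is written somewhere (for example into the step's ExecutionContext).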
Upvotes: 1