Reputation: 703
I'm using a reader to get a stored list of RequestDto from the job execution context:
public class ItemReaderStoredData implements ItemReader<List<RequestDto>> {

    private List<RequestDto> requestListDto = new ArrayList<>();

    @BeforeStep
    public void retrieveInterStepData(StepExecution stepExecution) {
        JobExecution jobExecution = stepExecution.getJobExecution();
        ExecutionContext jobContext = jobExecution.getExecutionContext();
        this.requestListDto = (List<RequestDto>) jobContext.get("directRequestListDto");
    }

    @Override
    public List<RequestDto> read() throws Exception, UnexpectedInputException, ParseException, NonTransientResourceException {
        return requestListDto;
    }
}
I want to check whether each uuid of the RequestDto list is present in the database by adapting the cursorItemReader used in the step configuration. The purpose is to write the non-redundant items to a file.
@Bean
public JdbcCursorItemReader<HistoricDto> cursorItemReader() {
    JdbcCursorItemReader<HistoricDto> reader = new JdbcCursorItemReader<>();
    reader.setSql("select * from tab");
    reader.setDataSource(this.dataSource);
    reader.setRowMapper(new TableMapperDto());
    return reader;
}
How can I use the output of reader one in reader two, in the same step? Or is it possible to merge both readers into one separate class?
Edit: I'm thinking of merging both: putting the JdbcCursorItemReader in a separate class together with the @BeforeStep method of ItemReaderStoredData would solve my problem, but then the DataSource is null and I get an error. Is merging them in this way safe, and how can I do it?
Upvotes: 0
Views: 268
Reputation: 31620
It is not recommended to store lists of items in the execution context as it is persisted at chunk/step boundaries (and this might be expensive in terms of performance).
I want to read an xlsx file (input), populate a DTO, check redundancy in the database, filter my DTO list, write new items to a file1 (or the database) and log redundant items in a file2 (output).
There are several ways to implement this without having to store items in a list and share them through the execution context. Here are a few options (minimal sketches of each are given below):

- Use an item processor that checks the existence of each item in the database and filters existing items, together with an ItemProcessListener that logs the redundant items. Non-existing items will not be filtered by the processor and can be written to the db using the step's item writer.
- Use a SkipListener instead of an ItemProcessListener. In this case, the processor would throw an exception for existing items (which should be declared as skippable) instead of returning null for existing items (i.e. filter them).
- A ClassifierCompositeItemWriter can also be used to classify items and write them where appropriate (for example, new items to the database or file1 and redundant items to file2), without filtering anything in the processor.

There are other techniques, like loading the file into a staging table and doing the comparison between both tables, but those depend on the use case (input size, comparison logic, etc).
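As a rough illustration of the first option, here is a minimal sketch of a filtering processor plus an ItemProcessListener. The getUuid() accessor, the "tab" table/uuid column and the JdbcTemplate-based existence check are assumptions taken from the question, not a definitive implementation:

// Filters items whose uuid already exists in the database.
public class ExistingItemFilterProcessor implements ItemProcessor<RequestDto, RequestDto> {

    private final JdbcTemplate jdbcTemplate;

    public ExistingItemFilterProcessor(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public RequestDto process(RequestDto item) {
        Integer count = jdbcTemplate.queryForObject(
                "select count(*) from tab where uuid = ?", Integer.class, item.getUuid());
        // Returning null filters the item: it never reaches the step's item writer.
        return (count != null && count > 0) ? null : item;
    }
}

// Reacts to filtered items; this is where redundant items could be logged or written to file2.
public class RedundantItemListener implements ItemProcessListener<RequestDto, RequestDto> {

    @Override
    public void beforeProcess(RequestDto item) { }

    @Override
    public void afterProcess(RequestDto item, RequestDto result) {
        if (result == null) {
            // the processor filtered this item, i.e. it already exists in the database
            System.out.println("Redundant item: " + item.getUuid());
        }
    }

    @Override
    public void onProcessError(RequestDto item, Exception e) { }
}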
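For the SkipListener variant, a minimal sketch could look like the following; ItemAlreadyExistsException is a hypothetical exception that the processor would throw for existing items and that the step would declare as skippable:

// Hypothetical exception thrown by the processor when the uuid is already in the database.
public class ItemAlreadyExistsException extends Exception {
    public ItemAlreadyExistsException(String message) {
        super(message);
    }
}

// Receives skipped items; redundant items can be logged or written to file2 here.
public class RedundantItemSkipListener implements SkipListener<RequestDto, RequestDto> {

    @Override
    public void onSkipInRead(Throwable t) { }

    @Override
    public void onSkipInProcess(RequestDto item, Throwable t) {
        System.out.println("Redundant item skipped: " + item.getUuid());
    }

    @Override
    public void onSkipInWrite(RequestDto item, Throwable t) { }
}

The step would then need to be made fault tolerant, roughly .faultTolerant().skip(ItemAlreadyExistsException.class).skipLimit(...) with the listener registered on it.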
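And a minimal sketch of the classifier approach, assuming the processor sets a hypothetical existing flag on each RequestDto instead of filtering it, and that two delegate writers exist:

@Bean
public ClassifierCompositeItemWriter<RequestDto> classifierItemWriter(
        ItemWriter<RequestDto> newItemWriter,       // e.g. writes new items to file1 or the database
        ItemWriter<RequestDto> redundantItemWriter  // e.g. logs redundant items to file2
) {
    ClassifierCompositeItemWriter<RequestDto> writer = new ClassifierCompositeItemWriter<>();
    // Route each item to the appropriate delegate based on the flag set by the processor.
    writer.setClassifier(item -> item.isExisting() ? redundantItemWriter : newItemWriter);
    return writer;
}

Note that if the delegate writers are ItemStreams (like FlatFileItemWriter), they need to be registered as streams on the step so they are opened and closed properly.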
Upvotes: 1