abdelhak ezzaidi

Reputation: 1

Spring batch multiple output file for input files with MultiResourceItemReader

I am using MultiResourceItemReader to read from multiple CSV files whose lines map to ObjectX(field1,field2,field3...). The problem is that after processing, the writer receives the ObjectX lines from all the files together, but I have to write the accepted data to a file with the same name as its input file. I am using DelimitedLineAggregator. Is there a way to have one writer per input file while using MultiResourceItemReader, given that the writer accepts only one resource at a time?

This is an example of what I have:

@Bean
public MultiResourceItemReader<ObjectX> multiResourceItemReader() 
{
  MultiResourceItemReader<ObjectX> resourceItemReader = new MultiResourceItemReader<>();
  resourceItemReader.setResources(inputResources);
  resourceItemReader.setDelegate(flatFileItemReader()); // delegate is the FlatFileItemReader bean below
  return resourceItemReader;
}

@Bean
public FlatFileItemReader<ObjectX> flatFileItemReader() {
  FlatFileItemReader<ObjectX> flatFileItemReader = new FlatFileItemReader<>();
  flatFileItemReader.setComments(new String[]{});    
  flatFileItemReader.setLineMapper(lineMapper());

  return flatFileItemReader;
}
@Override
@StepScope
public Sinistre process(ObjectX objectX) throws Exception {
//business logic

    return objectX;
}

@Bean
@StepScope
public FlatFileItemWriter<Sinistre> flatFileItemWriter(
        @Value("${doneFile}") FileSystemResource doneFile,
        @Value("#{stepExecution.jobExecution}") JobExecution jobExecution
) {
    
    FlatFileItemWriter<Sinistre> writer = new FlatFileItemWriter<Sinistre>() {
        private String resourceName;
        @Override
        public String doWrite(List<? extends Sinistre> items) {

            //business logic
            //business logic
            //business logic
            return super.doWrite(items);
        }
    };

    DelimitedLineAggregator<Sinistre> delimitedLineAggregator = new DelimitedLineAggregator<>();
    delimitedLineAggregator.setDelimiter(";");
    BeanWrapperFieldExtractor<Sinistre> beanWrapperFieldExtractor = new BeanWrapperFieldExtractor<>();
    beanWrapperFieldExtractor.setNames(new String[]{"field1", "field2", "field3", "field4" /* ... */});
    delimitedLineAggregator.setFieldExtractor(beanWrapperFieldExtractor);
    writer.setResource(doneFile);
    writer.setLineAggregator(delimitedLineAggregator);


    // the header content comes from the job execution context
    writer.setHeaderCallback(new FlatFileHeaderCallback() {
        @Override
        public void writeHeader(Writer writer) throws IOException {
            writer.write((String) jobExecution.getExecutionContext().get("header"));
        }
    });
    writer.setAppendAllowed(false);

    writer.setFooterCallback(new FlatFileFooterCallback() {
        @Override
        public void writeFooter(Writer writer) throws IOException {
            writer.write("#--- fin traitement ---");
        }
    });
    return writer;
}

This is the class I referred to as ObjectX:

public class SinistreDto  implements ResourceAware {

    private String codeCompagnieA;//A
    private String numPoliceA;//B
    private String numAttestationA;//C
    private String immatriculationA;//D
    private String numSinistreA;//E
    private String pctResponsabiliteA;//F

    private String dateOuvertureA;//G

    private String codeCompagnieB;//H
    private String numPoliceB;//I
    private String numAttestationB;//J
    private String immatriculationB;//K
    private String numSinistreB;//L
    private Resource resource;

    // getters and setters omitted
}

And this is the CSV files' data, one record per line (I will have a bunch of files with data exactly like this):

38;5457;16902-A;0001-02-34;84485;000;20221010 12:15;55;5457;W3456;22-A555
76;544687;16902;1234-56;8448;025;20221010 12:15;22;544687;WW456;22-A555
65;84987;16902;WW 123456;74478;033;20221010 12:15;88;84987;WW3456;22-A555

This is how I expect the output file to look for each input file:

#header

38;5457;16902-A;0001-02-34;84485;000;20221010 12:15;55;5457;W3456;22-A555
76;544687;16902;1234-56;8448;025;20221010 12:15;22;544687;WW456;22-A555
65;84987;16902;WW 123456;74478;033;20221010 12:15;88;84987;WW3456;22-A555

#--- fin traitement ---

Upvotes: 0

Views: 740

Answers (1)

Mahmoud Ben Hassine

Reputation: 31600

I see no difference between the input file and the output file except the header and trailer lines. But that is not an issue; you probably omitted the processing part as it is not relevant to the question.

I believe the MultiResourceItemReader is not suitable for your case, as items from different input files can end up in the same chunk and hence be written to the same output file, which is not what you want.

I think a good option for your use case is partitioning, where each partition is a file. This way, each input file will be read, processed, and written to a corresponding output file. Spring Batch provides the MultiResourcePartitioner, which creates one partition per file. You can find an example here: https://github.com/spring-projects/spring-batch/blob/main/spring-batch-samples/src/main/resources/jobs/iosample/multiResource.xml.
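A minimal sketch of that setup in Java config, assuming Spring Batch 4.x (the `file:input/*.csv` pattern, the `output/` directory, and the bean names are illustrative; `lineMapper()` and `lineAggregator()` stand for the beans already shown in the question):

```java
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.core.partition.support.MultiResourcePartitioner;
import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.FlatFileItemWriter;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.FileSystemResource;
import org.springframework.core.io.Resource;
import org.springframework.core.task.SimpleAsyncTaskExecutor;

@Configuration
public class PartitionedFileJobConfig {

    // One partition per input file; MultiResourcePartitioner stores each
    // resource's URL in the step execution context under the key "fileName".
    @Bean
    public MultiResourcePartitioner partitioner(
            @Value("file:input/*.csv") Resource[] inputResources) {
        MultiResourcePartitioner partitioner = new MultiResourcePartitioner();
        partitioner.setResources(inputResources);
        return partitioner;
    }

    // Step-scoped reader bound to the single file of the current partition.
    @Bean
    @StepScope
    public FlatFileItemReader<SinistreDto> reader(
            @Value("#{stepExecutionContext['fileName']}") Resource file) {
        FlatFileItemReader<SinistreDto> reader = new FlatFileItemReader<>();
        reader.setResource(file);
        reader.setLineMapper(lineMapper()); // your existing line mapper bean
        return reader;
    }

    // Step-scoped writer whose output file name is derived from the input file,
    // so each input file gets its own output file with the same name.
    @Bean
    @StepScope
    public FlatFileItemWriter<Sinistre> writer(
            @Value("#{stepExecutionContext['fileName']}") Resource file) {
        FlatFileItemWriter<Sinistre> writer = new FlatFileItemWriter<>();
        writer.setResource(new FileSystemResource("output/" + file.getFilename()));
        writer.setLineAggregator(lineAggregator()); // your existing aggregator bean
        return writer;
    }

    // Manager step fans out one worker step execution per partition.
    @Bean
    public Step partitionStep(StepBuilderFactory steps, Step workerStep,
                              MultiResourcePartitioner partitioner) {
        return steps.get("partitionStep")
                .partitioner("workerStep", partitioner)
                .step(workerStep)
                .taskExecutor(new SimpleAsyncTaskExecutor())
                .build();
    }
}
```

Each worker step execution then reads, processes, and writes exactly one file, so your header and footer callbacks apply per output file.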

Upvotes: 1
