Sundar

Reputation: 105

Spring batch rollback all chunks & write one file at a time

I am a newbie in Spring Batch and I have a couple of questions.

Question 1: I am using a MultiResourceItemReader to read a bunch of CSV files and a JDBC item writer to update the DB in batches. The commit interval is set to 1000. If there is a file with 10k records and I encounter a DB error at the 7th batch, is there any way I can roll back all the previously committed chunks?

Question 2: If there are two files, each having 100 records, and the commit interval is set to 1000, then the MultiResourceItemReader reads both files and sends them to the writer. Is there any way to write just one file at a time, ignoring the commit interval in this case, essentially creating a loop in the writer alone?
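For reference, a minimal sketch of the reader side of that setup (Java is used here for brevity; the csvReader delegate, the input/*.csv pattern and the MyDomain record type are illustrative, not taken from the actual job):

import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.MultiResourceItemReader;
import org.springframework.core.io.Resource;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;

public class ReaderSetupSketch {

    public MultiResourceItemReader<MyDomain> fileReader(FlatFileItemReader<MyDomain> csvReader) throws Exception {
        // Every CSV matched by the pattern is handed to the same delegate reader in turn.
        Resource[] inputFiles = new PathMatchingResourcePatternResolver().getResources("file:input/*.csv");

        MultiResourceItemReader<MyDomain> reader = new MultiResourceItemReader<>();
        reader.setResources(inputFiles);
        reader.setDelegate(csvReader);
        return reader;
    }
}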

Upvotes: 1

Views: 6184

Answers (2)

Sundar

Reputation: 105

Posting the solution that worked for me in case someone needs it for reference.

For Question 1, I was able to achieve it by extending StepListenerSupport in the writer and overriding beforeStep, afterChunk and afterStep. Sample snippet below:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Types;
import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.batch.core.BatchStatus;
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.listener.StepListenerSupport;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.item.ItemWriter;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;

public class JDBCWriter extends StepListenerSupport<MyDomain, MyDomain> implements ItemWriter<MyDomain> {

    private static final Logger LOG = LoggerFactory.getLogger(JDBCWriter.class);

    private static final String SQL = "{ CALL STORED_PROC(?, ?, ?, ?, ?) }";

    @Autowired
    private JdbcTemplate jdbcTemplate;

    private Connection connection;
    private boolean errorFlag;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        try {
            // Hold one connection for the whole step and turn off auto-commit,
            // so nothing is committed until afterStep decides to commit or roll back.
            connection = jdbcTemplate.getDataSource().getConnection();
            connection.setAutoCommit(false);
        }
        catch (SQLException ex) {
            setErrorFlag(Boolean.TRUE);
        }
    }

    @Override
    public void write(List<? extends MyDomain> items) throws Exception {
        if (items.isEmpty()) {
            setErrorFlag(Boolean.TRUE);
            return;
        }

        for (MyDomain item : items) {
            try (CallableStatement callableStatement = connection.prepareCall(SQL)) {
                // Placeholder values; in real code these come from the current item.
                callableStatement.setString(1, "FirstName");
                callableStatement.setString(2, "LastName");
                callableStatement.setString(3, "Date of Birth");
                callableStatement.setInt(4, 1990);

                // The stored procedure reports an error count via its fifth parameter.
                callableStatement.registerOutParameter(5, Types.INTEGER);

                callableStatement.execute();

                int errors = callableStatement.getInt(5);
                if (errors != 0) {
                    setErrorFlag(Boolean.TRUE);
                }
            }
        }
    }

    @Override
    public void afterChunk(ChunkContext context) {
        if (errorFlag) {
            context.getStepContext().getStepExecution().setExitStatus(ExitStatus.FAILED); // Fail the step
            context.getStepContext().getStepExecution().setStatus(BatchStatus.FAILED);    // Fail the batch
        }
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        try {
            if (!errorFlag) {
                connection.commit();   // Commit every chunk written during the step in one shot
            }
            else {
                connection.rollback(); // Roll back everything written during the step
                stepExecution.setExitStatus(ExitStatus.FAILED);
            }
        }
        catch (SQLException ex) {
            LOG.error("Commit failed!", ex);
        }
        finally {
            try {
                connection.close();
            }
            catch (SQLException ignored) {
            }
        }

        return stepExecution.getExitStatus();
    }

    public void setErrorFlag(boolean errorFlag) {
        this.errorFlag = errorFlag;
    }
}

XML Config:

<?xml version="1.0" encoding="UTF-8"?>

<beans xmlns="http://www.springframework.org/schema/beans"
    ....
    http://www.springframework.org/schema/batch/spring-batch-3.0.xsd">

<job id="fileLoadJob" xmlns="http://www.springframework.org/schema/batch">

    <step id="batchFileUpload" >
        <tasklet>
            <chunk reader="fileReader"
                   commit-interval="1000"
                   writer="JDBCWriter"
            />
        </tasklet>
    </step>

</job>

<bean id="fileReader" class="...com.FileReader" />
<bean id="JDBCWriter" class="...com.JDBCWriter" />

</beans>

Upvotes: 2

Michael Minella

Reputation: 21493

Question 1: The only way to accomplish this is via some form of compensating logic. You can do that via a listener (ChunkListener#afterChunkError for example), but the implementation is up to you. There is nothing within Spring Batch that knows what the overall state of the output is and how to roll it back beyond the current transaction.
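For illustration, a rough sketch of that compensating-logic idea, assuming the rows written by the job can be traced back to the job execution that inserted them (MY_TABLE and its JOB_EXECUTION_ID column are hypothetical; the actual compensation is up to you):

import org.springframework.batch.core.ChunkListener;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.jdbc.core.JdbcTemplate;

public class CompensatingChunkListener implements ChunkListener {

    private final JdbcTemplate jdbcTemplate;

    public CompensatingChunkListener(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public void beforeChunk(ChunkContext context) {
        // nothing to do
    }

    @Override
    public void afterChunk(ChunkContext context) {
        // nothing to do
    }

    @Override
    public void afterChunkError(ChunkContext context) {
        // Compensating action: undo the chunks that earlier transactions already committed,
        // assuming every inserted row is tagged with the id of the job execution that wrote it.
        Long jobExecutionId = context.getStepContext().getStepExecution().getJobExecutionId();
        jdbcTemplate.update("DELETE FROM MY_TABLE WHERE JOB_EXECUTION_ID = ?", jobExecutionId);
    }
}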

Question 2: Assuming you're looking for one output file per input file, due to the fact that most Resource implementations are non-transactional, the writers associated with them do special work to buffer up to the commit point and then flush. The problem here is that because of that, there is no real opportunity to divide that buffer to multiple resources. To be clear, it can be done, you'll just need a custom ItemWriter to do it.
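As a rough sketch of such a custom ItemWriter, assuming the items carry the Resource they were read from (for example via ResourceAware on the domain class); PerInputFileWriter and getSourceResource() are illustrative names, not framework API:

import java.util.Collections;
import java.util.List;

import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.ItemStream;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.file.FlatFileItemWriter;
import org.springframework.batch.item.file.transform.PassThroughLineAggregator;
import org.springframework.core.io.FileSystemResource;
import org.springframework.core.io.Resource;

public class PerInputFileWriter implements ItemWriter<MyDomain>, ItemStream {

    private Resource currentInput;
    private FlatFileItemWriter<MyDomain> delegate;

    @Override
    public void write(List<? extends MyDomain> items) throws Exception {
        for (MyDomain item : items) {
            Resource input = item.getSourceResource(); // hypothetical accessor populated via ResourceAware
            if (!input.equals(currentInput)) {
                switchTo(input);                        // start a new output file for the new input file
            }
            delegate.write(Collections.singletonList(item));
        }
    }

    private void switchTo(Resource input) throws Exception {
        if (delegate != null) {
            delegate.close();
        }
        delegate = new FlatFileItemWriter<>();
        delegate.setResource(new FileSystemResource(input.getFilename() + ".out"));
        delegate.setLineAggregator(new PassThroughLineAggregator<>());
        delegate.afterPropertiesSet();
        delegate.open(new ExecutionContext());
        currentInput = input;
    }

    @Override
    public void open(ExecutionContext executionContext) {
    }

    @Override
    public void update(ExecutionContext executionContext) {
    }

    @Override
    public void close() {
        if (delegate != null) {
            delegate.close();
        }
    }
}

Restart handling is left out to keep the sketch short; a production version would also need to track state in the ExecutionContext.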

Upvotes: 0
