Kubero

Reputation: 21

Spring Batch lock when reader and writer work with the same table

We are evaluating Spring Batch (version 3.0.3), and to do so we have built a simple data process.

Our test:

We have 500k rows in a table, and we want to process them, updating a field as a "row processed" flag.

Our job definition is like this:

<beans:beans xmlns="http://www.springframework.org/schema/batch"
    xmlns:beans="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/batch
        http://www.springframework.org/schema/batch/spring-batch.xsd"
    profile="job1">

    <job id="job1">
        <step id="step1">
            <tasklet>
                <chunk reader="itemReader"  writer="itemWriter"
                    commit-interval="#{jobParameters['commit.interval']?:'100'}" />     
            </tasklet>
        </step>
    </job>
</beans:beans>

We are using a JdbcCursorItemReader as the reader (with a query like SELECT field1, field2 FROM table WHERE field3 = '2Process') and a JdbcBatchItemWriter as the writer (UPDATE table SET field3 = 'Processed' WHERE field1 = :field1 AND field2 = :field2).
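Our reader and writer are wired roughly like this (a sketch: the row mapper, the parameter source provider and the com.example.Row class below are illustrative, not our exact beans; the SQL is the one above):

<beans:bean id="itemReader"
    class="org.springframework.batch.item.database.JdbcCursorItemReader">
    <beans:property name="dataSource" ref="dataSource" />
    <beans:property name="sql"
        value="SELECT field1, field2 FROM table WHERE field3 = '2Process'" />
    <!-- maps each row onto a bean with field1/field2 properties -->
    <beans:property name="rowMapper">
        <beans:bean class="org.springframework.jdbc.core.BeanPropertyRowMapper">
            <beans:constructor-arg value="com.example.Row" />
        </beans:bean>
    </beans:property>
</beans:bean>

<beans:bean id="itemWriter"
    class="org.springframework.batch.item.database.JdbcBatchItemWriter">
    <beans:property name="dataSource" ref="dataSource" />
    <beans:property name="sql"
        value="UPDATE table SET field3 = 'Processed' WHERE field1 = :field1 AND field2 = :field2" />
    <!-- resolves :field1/:field2 from the item's bean properties -->
    <beans:property name="itemSqlParameterSourceProvider">
        <beans:bean
            class="org.springframework.batch.item.database.BeanPropertyItemSqlParameterSourceProvider" />
    </beans:property>
</beans:bean>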

Our database is DB2 on z/OS, configured with page-level locking on the table we are using.

We use Bitronix as the transaction manager.

When we launch it, we get this exception:

Caused by: com.ibm.db2.jcc.am.SqlException: Error for batch element #8: DB2 SQL Error: SQLCODE=-913, SQLSTATE=57033, SQLERRMC=00C9008E;00000302;DBABCD  .TSTABLE .X'000201', DRIVER=4.19.26

It seems that the exception occurs because the reader's transaction is not the same as the writer's transaction against the database, and we are getting a lock on page 201 (the X'000201' in the message) because of it.

Obviously, if we change the isolation level of the SELECT by adding WITH UR, the batch finishes OK.

Our question is: could this database lock be a problem caused by the transaction manager, because Bitronix doesn't support nested transactions? Or does it seem to be a problem with DB2 and the locking policy defined on the table?

Also, we don't know whether it is good practice to have a Spring Batch step that reads from and writes to the same table.

Edit: Sorry, you are right, big mistake in my explanation: it is DB2 on z/OS.

Upvotes: 1

Views: 4804

Answers (1)

Alexander Petrov

Reputation: 9492

Hello. First, since you have page locking and not row locking, deadlocks are inevitable when you run an update-intensive operation such as a batch. So you have to just accept it. What you need to do is create an architecture that can tolerate it gracefully.

First, I would strongly recommend you switch from page locking to row locking. The very idea of page locking is to suit the database to extremely read-intensive applications. I seriously doubt your application is one of those, which means the small overhead that row locking introduces is a negligible trade-off for the convenience it brings.

But let's presume you don't switch off page locking. What are your alternatives for reducing the deadlocks to a minimum?

  1. Reduce the size of the page!
  2. Shuffle the records. For example, don't make the batch process the IDs sequentially: if you use 10 threads, have the first thread select every 10th record, the second thread every 9th, and so on. Even better, sort by some random factor when the readers select (see the sketch after this list).
  3. Use batching, but do not batch many records together: decrease the chunk size as much as possible. This way you decrease the overall locking factor.
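For example, point 2 could look roughly like this with a step-scoped reader. This is only a sketch: it assumes field1 is numeric and that a partitioner puts hypothetical threadCount/threadIndex values into each step execution context.

<beans:bean id="partitionedItemReader" scope="step"
    class="org.springframework.batch.item.database.JdbcCursorItemReader">
    <beans:property name="dataSource" ref="dataSource" />
    <!-- MOD interleaves the IDs, so each thread touches a spread of pages
         instead of one contiguous range -->
    <beans:property name="sql"
        value="SELECT field1, field2 FROM table
               WHERE field3 = '2Process'
               AND MOD(field1, #{stepExecutionContext['threadCount']})
                   = #{stepExecutionContext['threadIndex']}" />
    <beans:property name="rowMapper" ref="rowMapper" />
</beans:bean>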

Basically, whatever you do, you need to think in this direction: how can I keep my transactions as short as possible and spread them across as many different pages as possible?

But even then you cannot guarantee that not a single deadlock is hit, which means you need to implement retry logic. That is mandatory.
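In Spring Batch XML the retry can be declared on the chunk itself. A sketch, assuming your -913 reaches Spring translated into one of its lock/deadlock DataAccessExceptions (if it surfaces as a raw SqlException, list that class instead):

<step id="step1">
    <tasklet>
        <chunk reader="itemReader" writer="itemWriter"
            commit-interval="#{jobParameters['commit.interval']?:'100'}"
            retry-limit="3">
            <retryable-exception-classes>
                <include class="org.springframework.dao.DeadlockLoserDataAccessException" />
                <include class="org.springframework.dao.CannotAcquireLockException" />
            </retryable-exception-classes>
        </chunk>
    </tasklet>
</step>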

And as a conclusion: just save yourself all this hassle and configure DB2 for row locking.

If it were me, I would probably have set the chunk size to 1, 2 or 3, something like that, but definitely not 100.

Upvotes: 2
