Matt Larsen

Reputation: 481

Database race conditions

I've heard that many application developers run into trouble with race conditions in database processing. A typical example goes something like this (the stock of item 45 starts at 3, and two users each buy one unit):

1. User A reads numStock for item 45 and gets 3.
2. User B reads numStock for the same item and also gets 3.
3. User A writes numStock = 2 (3 - 1).
4. User B also writes numStock = 2, overwriting A's update.

In this example, the numStock field should have become 1, but it was set to 2 instead due to the race between the users.

So of course locks can be used, but I've thought of another way of handling this: passing the row's current values as WHERE criteria. Let me explain...

In the example above, the SQL might look like this:

-- select

SELECT itemID, numStock FROM items WHERE itemID = 45

-- update

UPDATE items SET numStock = 2 WHERE itemID = 45

My idea for resolving the race:

-- select

SELECT itemID, numStock FROM items WHERE itemID = 45

-- update

UPDATE items SET numStock = 2 WHERE itemID = 45 AND numStock = 3

Thus, the query checks whether the data has changed since it was SELECTed. So my questions are: (1) would this [always] work, and (2) is this a better option than the database's own locking mechanisms (e.g. MySQL transactions)?
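A minimal sketch of the idea, here written in Python against SQLite for the sake of a runnable example (the question uses MySQL, but the pattern is the same): the UPDATE only takes effect if numStock still holds the value that was read, and the application must check the affected-row count to find out whether it lost the race.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (itemID INTEGER PRIMARY KEY, numStock INTEGER)")
conn.execute("INSERT INTO items VALUES (45, 3)")

# Read the current stock level.
(stock,) = conn.execute(
    "SELECT numStock FROM items WHERE itemID = 45").fetchone()

# Write back only if the row is unchanged since the read.
cur = conn.execute(
    "UPDATE items SET numStock = ? WHERE itemID = 45 AND numStock = ?",
    (stock - 1, stock))
conn.commit()

if cur.rowcount == 0:
    # Another user changed the row in between: the UPDATE matched no row,
    # so the whole read-modify-write must be retried.
    print("conflict, retry")
else:
    print("updated")
```

Note that the UPDATE itself never fails when it loses the race; it simply matches zero rows, so silently ignoring the row count would reintroduce the bug.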

Thanks for your time.

Upvotes: 13

Views: 24186

Answers (3)

Martin Thoma

Reputation: 136585

Every interaction with a database is a transaction: implicit when you write a single statement, or explicit when you use BEGIN / COMMIT / ROLLBACK.

On top of that, you choose a transaction isolation level, which defines which phenomena can occur. Typical phenomena are the dirty read, the non-repeatable read, and the phantom read. Typical isolation levels are READ_COMMITTED, REPEATABLE_READ, and SERIALIZABLE.

Let's look at your specific example (time flows down):

T1                T2
-----------------------------
x := r[numStock]
                  y := r[numStock]
w[numStock] := x-1
                  w[numStock] := y-1

Hence the write of T2 was based on stale data. This is a lost update. Some databases, such as Postgres, prevent lost updates: when you try to commit such a transaction at the REPEATABLE_READ isolation level or higher, an exception is thrown:

ERROR: could not serialize access due to concurrent update

However, I've heard that the InnoDB engine in MySQL does not check for lost updates (source).

The mentioned transaction isolation levels specify which problems you want to prevent; they say nothing about how that is achieved. There is optimistic concurrency control (snapshot isolation) and pessimistic concurrency control (locks).
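To illustrate the pessimistic side: in MySQL or Postgres you would typically lock the row up front with SELECT ... FOR UPDATE. SQLite has no row locks, so this sketch (again Python/SQLite purely for a runnable example) uses BEGIN IMMEDIATE, which takes the database write lock at the start of the transaction, as the closest analog:

```python
import sqlite3

# isolation_level=None disables the driver's implicit transaction
# handling, so the explicit BEGIN below takes effect.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE items (itemID INTEGER PRIMARY KEY, numStock INTEGER)")
conn.execute("INSERT INTO items VALUES (45, 3)")

# Pessimistic control: acquire the write lock BEFORE reading, so a
# concurrent writer blocks (or errors) instead of producing a lost update.
conn.execute("BEGIN IMMEDIATE")
(stock,) = conn.execute(
    "SELECT numStock FROM items WHERE itemID = 45").fetchone()
conn.execute("UPDATE items SET numStock = ? WHERE itemID = 45",
             (stock - 1,))
conn.execute("COMMIT")  # release the lock
```

With the lock held from the read onward, no second transaction can sneak in between the SELECT and the UPDATE, at the cost of reduced concurrency.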

I'll soon also publish an article about those topics :-)

Upvotes: 2

Carlos

Reputation: 352

What about making a select and an update in a single statement:

UPDATE counters SET value = (@cur_value := value) + 1 WHERE name_counter = 'XXX';

and then

SELECT @cur_value;

Can this strategy solve a race condition?
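The @cur_value user variable above is MySQL-specific, but the core of the idea ports anywhere: let the database do the arithmetic inside a single UPDATE, so there is no read-modify-write gap at all. A sketch in Python/SQLite (chosen only to keep the example runnable; table and names taken from the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE counters (name_counter TEXT PRIMARY KEY, value INTEGER)")
conn.execute("INSERT INTO counters VALUES ('XXX', 0)")

# The increment happens entirely inside one statement, so two concurrent
# callers can never both compute the new value from the same old value.
conn.execute(
    "UPDATE counters SET value = value + 1 WHERE name_counter = 'XXX'")
(cur_value,) = conn.execute(
    "SELECT value FROM counters WHERE name_counter = 'XXX'").fetchone()
conn.commit()
```

One caveat: a separate SELECT afterwards can itself race (another increment may land in between), which is exactly what the @cur_value variable avoids in MySQL. SQLite 3.35+ and Postgres offer UPDATE ... RETURNING for the same single-statement effect.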

Upvotes: 2

Jens Schauder

Reputation: 81970

This strategy works and is known as 'optimistic locking'. That's because you do your processing assuming it will succeed, and only at the end do you actually check whether it did.

Of course you need a way to retry the transaction. And if the chance of failure is very high, it might become inefficient. But in most cases it works just fine.
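The retry part might look like the following sketch (Python/SQLite stand-in for the question's MySQL setup; the function name and retry limit are made up for illustration):

```python
import sqlite3

def decrement_stock(conn, item_id, max_retries=5):
    """Optimistic-locking decrement: re-read and retry on conflict."""
    for _ in range(max_retries):
        (stock,) = conn.execute(
            "SELECT numStock FROM items WHERE itemID = ?", (item_id,)
        ).fetchone()
        cur = conn.execute(
            "UPDATE items SET numStock = ? WHERE itemID = ? AND numStock = ?",
            (stock - 1, item_id, stock))
        conn.commit()
        if cur.rowcount == 1:   # our read was still valid: success
            return stock - 1
        # Someone else won the race; loop to re-read and try again.
    raise RuntimeError("too much contention, giving up")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (itemID INTEGER PRIMARY KEY, numStock INTEGER)")
conn.execute("INSERT INTO items VALUES (45, 3)")
new_stock = decrement_stock(conn, 45)  # 3 -> 2
```

Bounding the number of retries (rather than looping forever) is what keeps the pessimal high-contention case from degenerating into a busy loop.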

Upvotes: 15
