majic bunnie

Reputation: 1405

MySQL Under High Load, Race Condition?

I am experiencing what appears to be a race condition in an application I am involved with. In general terms, a page responsible for some heavy application logic follows this format (sketched below):

1. Select from test and determine whether any rows already match a clause.
2. If a matching row exists, terminate here; otherwise, proceed with the application logic.
3. Insert into the test table values that will match the initial select.
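
For illustration, a minimal sketch of that pattern with a hypothetical schema (user_id and action stand in for the real columns, which are not shown here):

    -- Step 1: check whether the action has already been recorded.
    SELECT id FROM test WHERE user_id = 42 AND action = 'claim_reward';

    -- Step 2: if no row came back, the heavy application logic runs in PHP.

    -- Step 3: record the action so that later requests fail the check.
    INSERT INTO test (user_id, action) VALUES (42, 'claim_reward');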

Normally this works fine and limits the action to a single execution. Under high load, however, when users deliberately send many requests simultaneously, MySQL allows many instances of the application logic to run, bypassing the restriction imposed by the initial select.

It seems to actually run something like this:

    select from test
    select from test
    select from test
    (all of which pass the check)
    insert into test
    insert into test
    insert into test

I believe this is done for efficiency reasons, but it has serious ramifications in the context of my application. I have attempted to use GET_LOCK() and RELEASE_LOCK(), but they do not appear to suffice under high load: the race condition is still present. Transactions are also not a possibility, as the application logic is very heavy and none of the tables involved uses a transaction-capable storage engine.
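
For reference, the named-lock pattern looks roughly like this (the lock name and columns are placeholders):

    -- Every competing request must use the SAME lock name and must hold
    -- the lock, on one connection, across both the check and the insert.
    SELECT GET_LOCK('test_claim_42', 10);  -- 1 = acquired, 0 = timed out

    SELECT id FROM test WHERE user_id = 42 AND action = 'claim_reward';
    -- ...application logic runs only if the select returned nothing...
    INSERT INTO test (user_id, action) VALUES (42, 'claim_reward');

    SELECT RELEASE_LOCK('test_claim_42');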

To anyone familiar with this behavior: is it possible to turn this type of handling off, so that MySQL always processes queries in the order they are received? Is there another way to make such queries atomic? Any help with this would be appreciated; I can't find much documentation of this behavior.

Upvotes: 3

Views: 333

Answers (2)

Borealid

Reputation: 98499

The problem here is that you have, as you surmised, a race condition.

The SELECT and the INSERT need to be one atomic unit.

The way you do this is via transactions. You cannot safely make the SELECT, return to PHP, and assume the SELECT's results will reflect the database state when you make the INSERT.
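
A rough sketch of that shape, assuming the table were moved to a transactional engine such as InnoDB with an index on the checked columns (table and column names are placeholders):

    START TRANSACTION;

    -- FOR UPDATE locks the matched index range; a competing transaction
    -- cannot slip a matching row in before we commit (its insert blocks,
    -- or deadlocks and must be retried).
    SELECT id FROM test
     WHERE user_id = 42 AND action = 'claim_reward'
       FOR UPDATE;

    -- If the select returned nothing, run the application logic, then:
    INSERT INTO test (user_id, action) VALUES (42, 'claim_reward');

    COMMIT;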

If well-designed transactions (the correct solution) are, as you say, not possible - and I still strongly recommend them - you're going to have to make the final INSERT atomically check that its assumptions still hold, such as via an INSERT guarded by a NOT EXISTS subquery, a stored procedure, or catching the INSERT's duplicate-key error in the application. If the assumptions no longer hold, the INSERT aborts back to your PHP code, which must start the logic over.
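
For example, a single statement that inserts only when no matching row exists at the instant it executes; the affected-row count then tells the application whether it won the race (column names are placeholders):

    INSERT INTO test (user_id, action)
    SELECT 42, 'claim_reward' FROM DUAL
     WHERE NOT EXISTS (SELECT 1 FROM test
                        WHERE user_id = 42 AND action = 'claim_reward');
    -- 0 affected rows means another request got there first: abort.

Because it is one statement, the check and the write cannot be interleaved with another connection's check the way two separate queries can.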

By the way, MySQL likely is executing requests in the order they were received. With multiple simultaneous connections, it is entirely possible to receive SELECT A, SELECT B, INSERT A, INSERT B. Thus, the only "solution" would be to allow only one connection at a time - and that would kill your scalability dead.

Upvotes: 2

Niet the Dark Absol

Reputation: 324690

Personally, I would go about the check another way.

Attempt to insert the row. If it fails, then there was already a row there.

In this manner, you check for a duplicate and insert the new row in a single query, eliminating the possibility of a race.
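
A sketch of that, assuming a unique key can be placed on the columns the original SELECT matched (names are placeholders):

    -- One-time setup: let the database itself enforce uniqueness.
    ALTER TABLE test ADD UNIQUE KEY uq_user_action (user_id, action);

    -- Per request: just insert. A duplicate fails with error 1062
    -- ("Duplicate entry"), which the PHP code catches and treats as
    -- "this action has already run".
    INSERT INTO test (user_id, action) VALUES (42, 'claim_reward');

INSERT IGNORE is a variant that swallows the duplicate error instead; an affected-row count of 0 then signals that the row was already there.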

Upvotes: 1
