Adam Skiba

Reputation: 633

PHP + Locking MySQL Table fails

I have a table that needs to be locked against inserts, but it also needs to remain updatable while inserts are prevented.

function myfunction() {
  $locked = mysql_result(mysql_query("SELECT locked FROM mylock"),0,0);
  if ( $locked ) return false;
  mysql_query("LOCK TABLES mylock WRITE");
  mysql_query("UPDATE mylock SET locked=1");
  mysql_query("UNLOCK TABLES");

  /* I'm checking another table to see if a record doesn't exist already */
  /* If it doesn't exist then I'm inserting that record */

  mysql_query("LOCK TABLES mylock WRITE");
  mysql_query("UPDATE mylock SET locked=0");
  mysql_query("UNLOCK TABLES");  
}

But this isn't enough: the function is called again from another script, and the two calls end up inserting simultaneously, which causes duplicate records.

This is urgent, please help. I thought of using UNIQUE on the fields, but there are 2 fields (player1, player2), and the same pair of player IDs must not appear twice, in either order.

Unwanted behavior:

Record A = ( Player1: 123, Player2: 456 )
Record B = ( Player1: 456, Player2: 123 )
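One lock-free way to rule out both orderings is to normalize each pair before storing it, so a plain UNIQUE index catches the reversed duplicate. A sketch, assuming a hypothetical table named `matches` with the `player1`/`player2` columns from the question:

```sql
-- Hypothetical table name; the column names come from the question.
ALTER TABLE matches
  ADD UNIQUE KEY uniq_pair (player1, player2);

-- Always store the smaller ID first, so (123, 456) and (456, 123)
-- normalize to the same row and collide on the unique key:
INSERT INTO matches (player1, player2)
VALUES (LEAST(123, 456), GREATEST(123, 456));
```

With this convention every lookup must also order its values with LEAST/GREATEST, but the database itself then guarantees no pair exists twice in either order.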

Upvotes: 1

Views: 1449

Answers (3)

sbutler

Reputation: 637

I just noticed you suffer from a race condition in your code. Assuming there isn't an error (see my comments)... two processes could both check and get a "not locked" result. The "LOCK TABLES" will serialize their access, but they'll both continue on thinking they hold the lock and thus insert duplicate records.

You could rewrite it as this:

mysql_query("LOCK TABLES mylock WRITE");
// Atomically claim the lock: the UPDATE only matches a row when nobody holds it
mysql_query("UPDATE mylock SET locked=1 WHERE locked=0");
$have_lock = mysql_affected_rows() > 0;
mysql_query("UNLOCK TABLES");
if (!$have_lock) return false;

Upvotes: 1

Md Mahbubur Rahman

Reputation: 2075

You don't need table-level locking here; it's better to use row-level locking. Row-level locking means only the row being modified is locked. The usual alternatives are to either lock the entire table for the duration of the modification, or else to lock some subset of the table. Row-level locking simply reduces that subset of the rows to the smallest number that still ensures integrity.

In the InnoDB transaction model, the goal is to combine the best properties of a multi-versioning database with traditional two-phase locking. InnoDB does locking on the row level and runs queries as nonlocking consistent reads by default, in the style of Oracle. The lock table in InnoDB is stored so space-efficiently that lock escalation is not needed: Typically, several users are permitted to lock every row in InnoDB tables, or any random subset of the rows, without causing InnoDB memory exhaustion.
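To make that concrete, here is a sketch of row-level locking applied to the question's duplicate-pair problem, again assuming a hypothetical InnoDB table named `matches`. Under the default REPEATABLE READ isolation, `SELECT ... FOR UPDATE` locks the matching row (or the index gap where it would go) until the transaction ends:

```sql
-- Requires the table to use ENGINE=InnoDB; MyISAM has no row locks.
START TRANSACTION;

-- Lock the row for this pair (or the gap where it would be inserted):
SELECT * FROM matches
WHERE player1 = LEAST(123, 456) AND player2 = GREATEST(123, 456)
FOR UPDATE;

-- If the SELECT returned no row, it is now safe to insert without a
-- second connection racing in between the check and the insert:
INSERT INTO matches (player1, player2)
VALUES (LEAST(123, 456), GREATEST(123, 456));

COMMIT;
```

The lock is released at COMMIT (or ROLLBACK), so keep the transaction short.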

If your problem is still not solved, then memory size may be the issue. InnoDB stores its lock tables in the main buffer pool. This means that the number of locks you can have at the same time is limited by the innodb_buffer_pool_size variable that was set when MySQL was started. By default, MySQL leaves this at 8MB, which is pretty useless if you're doing anything with InnoDB on your server.

Luckily, the fix for this issue is very easy: adjust innodb_buffer_pool_size to a more reasonable value. However, that fix does require a restart of the MySQL daemon. There's simply no way to adjust this variable on the fly (with the current stable MySQL versions as of this post's writing).

Before you adjust the variable, make sure that your server can handle the additional memory usage. The innodb_buffer_pool_size variable is a server wide variable, not a per-thread variable, so it's shared between all of the connections to the MySQL server (like the query cache). If you set it to something like 1GB, MySQL won't use all of that up front. As MySQL finds more things to put in the buffer, the memory usage will gradually increase until it reaches 1GB. At that point, the oldest and least used data begins to get pruned when new data needs to be present.
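For reference, the variable lives in the server config file and is only read at startup; a minimal fragment (the 1GB figure is illustrative only, size it to your workload and available RAM):

```ini
# e.g. /etc/my.cnf -- requires a restart of mysqld to take effect
[mysqld]
innodb_buffer_pool_size = 1G
```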

Upvotes: 0

Jocelyn

Reputation: 11393

I suggest not using locks at all. Instead, when inserting the data, do it like this:

mysql_query("INSERT IGNORE INTO my_table VALUES(<some values here>)");
if (mysql_affected_rows() > 0)
{
    // the data was inserted without error
    $last_id = mysql_insert_id();
    // add what you need here
}
else
{
    // the data could not be inserted (because it already exists in the table)
    // query the table to retrieve the data
    mysql_query("SELECT * FROM my_table WHERE <some_condition>");
    // add what you need here
}

When you add the IGNORE keyword to an INSERT statement, MySQL will attempt to insert the data. If the insert fails because a record with the same primary (or unique) key is already in the table, it fails silently. mysql_affected_rows() tells you how many records were actually inserted, so you can decide what to do.
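Note that INSERT IGNORE only suppresses duplicate-key errors, so this approach assumes the table has a UNIQUE or PRIMARY KEY on the relevant columns. A sketch using the question's hypothetical player1/player2 columns on `my_table`:

```sql
-- Without a unique constraint, INSERT IGNORE has nothing to collide with:
ALTER TABLE my_table ADD UNIQUE KEY uniq_pair (player1, player2);

-- First insert succeeds; the identical second one is silently skipped:
INSERT IGNORE INTO my_table (player1, player2) VALUES (123, 456);
INSERT IGNORE INTO my_table (player1, player2) VALUES (123, 456);
```

For the reversed-order duplicates in the question, the values would still need to be normalized (e.g. smaller ID first) before inserting, since the unique key compares columns positionally.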

Upvotes: 0
