Thomas

Reputation: 4719

Mysql deadlock explanation

I need some help solving a deadlock situation I am facing.

Below is a test script that simulates the issue I am having: I have a tags table which, per request, might insert/update a lot of entries, all in one transaction.

I know the deadlocks happen because many threads lock the same entries, and I would like some feedback on how I can avoid them.

I am running ab -n 100 -c 5 http://localhost/script.php

This is the table:

CREATE TABLE `tags` 
(
  `id` INT(8) UNSIGNED NOT NULL AUTO_INCREMENT,
  `name` VARCHAR(40) COLLATE utf8_unicode_ci DEFAULT NULL,
  `weight` INT(8) DEFAULT NULL,
  `type` ENUM('1','2','3') COLLATE utf8_unicode_ci DEFAULT NULL,
  `modified` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY  (`id`),
  UNIQUE KEY `tag_name` (`name`)
) ENGINE=INNODB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

<?php
function go() {

    $db = DB::getInstance();
    $db->start_transaction();
    $tgs = array('electricity', 'emergency', 'trees', 'New Jersey', 'Canada funnelled');
    foreach($tgs as $tg) {
        $arr_tags = array(
            'name' => $tg,
            'weight' => '0',
            'type' => TagTable::PRIMARY
        );
        $tag_instance = new Tag($arr_tags);
        $tag_instance->save(true);
        // the save method executes a query like the below
        // INSERT INTO tags (weight, type, modified, id, name) VALUES(0, "1", NULL, NULL, "$tag") ON DUPLICATE KEY UPDATE weight = weight+1;

    }
    $db->commit();
}
go();
?>
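For reference, each request effectively sends a statement sequence like the following (a sketch based on the query comment in save() above, with the columns that just take their defaults omitted). With ab -c 5, two concurrent transactions can each hold a record lock on the tag_name unique index for a different tag and then wait on the other's, which is the deadlock cycle:

```sql
START TRANSACTION;
-- one upsert per tag, all holding their locks until COMMIT
INSERT INTO tags (weight, type, name) VALUES (0, '1', 'electricity')
  ON DUPLICATE KEY UPDATE weight = weight + 1;
INSERT INTO tags (weight, type, name) VALUES (0, '1', 'emergency')
  ON DUPLICATE KEY UPDATE weight = weight + 1;
-- ... remaining tags ...
COMMIT;
```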

Upvotes: 1

Views: 1441

Answers (1)

RolandoMySQLDBA

Reputation: 44333

Surprisingly, deadlocks can occur even with InnoDB. Why?

The clustered index (known internally as the gen_clust_index) can be intermittently locked in the middle of a transaction involving bulk INSERTs. What is interesting about INSERTs in transactions has a lot to do with InnoDB's logging: the previous state of a row you INSERT is a nonexistent row, and MVCC data representing that nonexistent row is recorded in the undo logs to support rollbacks and consistent reads. Doing bulk INSERTs creates a lot of these nonexistent-row entries for rollback.

I once went through a series of questions from one person on the DBA StackExchange about deadlock situations with consecutive UPDATEs.

You may have to try executing the bulk INSERTs without a transaction, using either of the following:

  • with a COMMIT after each INSERT
  • use AUTOCOMMIT=1
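In SQL terms, either option amounts to making each upsert its own transaction, so a statement holds its locks only for its own duration (a sketch using the question's table):

```sql
-- Option 1: explicit COMMIT after each INSERT
INSERT INTO tags (weight, type, name) VALUES (0, '1', 'electricity')
  ON DUPLICATE KEY UPDATE weight = weight + 1;
COMMIT;

-- Option 2: let the server commit each statement implicitly
SET autocommit = 1;
INSERT INTO tags (weight, type, name) VALUES (0, '1', 'emergency')
  ON DUPLICATE KEY UPDATE weight = weight + 1;
```

Either way, there is no longer a transaction holding locks on several tag rows at once for another transaction to collide with.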

If you must have the INSERTs in a transaction, try increasing the InnoDB log file size and the bulk insert buffer:

Step 01) Add this line to /etc/my.cnf

[mysqld]
bulk_insert_buffer_size=256M
innodb_log_file_size=2047M

Step 02) service mysql stop

Step 03) rm -f /var/lib/mysql/ib_logfile[01]

Step 04) service mysql start

During the startup of mysql, ib_logfile0 and ib_logfile1 are recreated at the new size.
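After the restart, you can verify that the new settings took effect (the values reported are in bytes):

```sql
SHOW VARIABLES LIKE 'innodb_log_file_size';
SHOW VARIABLES LIKE 'bulk_insert_buffer_size';
```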

Give it a Try !!! I hope it helps.

UPDATE 2011-10-31 12:43 EDT

I just answered a similar question on the DBA StackExchange 3 days ago.

Upvotes: 3
