Anshuman Chatterjee

Reputation: 962

How to avoid Deadlock in this scenario?

I have an InnoDB table into which multiple connections might insert data, and a single MySQL event that runs every 10 seconds deletes some previously inserted records.

But I am getting deadlocks. How can I avoid them?

This code is responsible for inserting the records into the table row by row:

sql = "INSERT INTO ex_result (Result_ID, ParentResult_ID, StepNumber, Name, Type, DB_ID, Session_ID, Status, TotalTime, FunctionCall, FunctionResult, ToolTime, PluginName, Screenshot_FID, IsNegative, ContinueOnError, WantSnapshot, Message, ResultCode, Output, InputArguments) VALUES (@Result_ID, @ParentResult_ID, @StepNumber, @Name, @Type, @DB_ID, @Session_ID, @Status, @TotalTime, @FunctionCall, @FunctionResult, @ToolTime, @PluginName, @Screenshot_FID, @IsNegative, @ContinueOnError, @WantSnapshot, @Message, @ResultCode, @Output, @InputArguments)"

The code for the event is:

DELIMITER //

CREATE EVENT EVENT_RESULT_CLEANER
ON SCHEDULE
EVERY 10 second
COMMENT 'Clean all deleted results'
DO
BEGIN

DROP TEMPORARY TABLE IF EXISTS `Temp_Result_Purge`;

CREATE TEMPORARY TABLE `Temp_Result_Purge` (
`Result_ID` VARCHAR(63) NOT NULL,
PRIMARY KEY (`Result_ID`))
ENGINE = MEMORY;

INSERT INTO Temp_Result_Purge(Result_ID)
(
    SELECT t1.result_id
    FROM ex_result AS t1
    INNER JOIN ex_session as t2
    ON t1.session_id=t2.session_id
    WHERE NOT EXISTS(SELECT t3.result_id FROM ex_result as t3 WHERE t3.parentresult_id=t1.result_id)
    AND t2.DeletedOn IS NOT NULL
    LIMIT 2000
);

DELETE t1 FROM `ex_result` AS t1 INNER JOIN
`Temp_Result_Purge` AS t2 ON t1.Result_ID=t2.Result_ID;

DROP TEMPORARY TABLE `Temp_Result_Purge`;

END//

DELIMITER ;

Upvotes: 3

Views: 658

Answers (3)

Rick James

Reputation: 142518

First I have some nasty things to say, then I will get to a possible solution.

"Don't queue it, just do it." -- MySQL does not make a good queuing engine.

Do add BEGIN...COMMIT (as already mentioned). And the BEGIN...COMMIT needs to be around the other code, too.

Do add code to test for deadlocks. Then replay the BEGIN...COMMIT. You cannot avoid all deadlocks, so plan for them.
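
One way to code that retry inside the event is the following sketch (assuming MySQL 5.6-style condition handlers; 1213 is InnoDB's deadlock error number, and the cap of 3 attempts is an arbitrary choice):

DECLARE retries INT DEFAULT 0;   -- these DECLAREs go at the top of the event's BEGIN ... END
DECLARE done INT DEFAULT 0;

WHILE retries < 3 AND NOT done DO
    BEGIN
        -- On deadlock (error 1213), roll back and exit this inner block,
        -- so the WHILE loop replays the transaction.
        DECLARE EXIT HANDLER FOR 1213
        BEGIN
            ROLLBACK;
            SET retries = retries + 1;
        END;

        START TRANSACTION;
        -- ... the INSERT ... SELECT and the DELETE go here ...
        COMMIT;
        SET done = 1;
    END;
END WHILE;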

Drop the LIMIT to only, say, 10. Then put the purger into a continual loop rather than waking up every 10 seconds. (If you are about to say "that makes things worse", then read on; I'll give you a variant that may work better.)

Use a LEFT JOIN ... WHERE ... IS NULL instead of NOT EXISTS ( SELECT ... )
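
Applied to the query in the question, that rewrite could look like this (same tables and columns; only the NOT EXISTS is replaced):

INSERT INTO Temp_Result_Purge (Result_ID)
SELECT t1.result_id
FROM ex_result AS t1
INNER JOIN ex_session AS t2 ON t1.session_id = t2.session_id
LEFT JOIN ex_result AS t3 ON t3.parentresult_id = t1.result_id
WHERE t3.result_id IS NULL      -- i.e., no child rows exist
  AND t2.DeletedOn IS NOT NULL
LIMIT 2000;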

Don't recreate the table over and over; just TRUNCATE TABLE. Or, better yet, just DELETE directly without going through a tmp table. However, that leads to another question...
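
A sketch of that direct DELETE, skipping the tmp table entirely (note that multi-table DELETE does not accept LIMIT, which is part of why the chunking below matters):

DELETE t1
FROM ex_result AS t1
INNER JOIN ex_session AS t2 ON t1.session_id = t2.session_id
LEFT JOIN ex_result AS t3 ON t3.parentresult_id = t1.result_id
WHERE t3.result_id IS NULL
  AND t2.DeletedOn IS NOT NULL;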

How many rows does the query have to go through to find the LIMIT rows? Keep in mind that the SELECT is interfering with the INSERTs. If it usually has to scan a million rows to find 2000 to delete, then we need to figure out a way to make the rows easier to find. For that, we need more details about your app and the table sizes. Or ponder this...

One technique for politely scanning a million rows to find just a few is to walk through the table 1000 rows at a time, usually using the PRIMARY KEY. Note: That's 1000 rows of the table, not 1000 rows eligible for deletion. After each 1000, DELETE the ones you found (if any), then move on to the next 1000. When you get to the end of the table, start over. Details on how to do this chunking are found here: http://mysql.rjweb.org/doc.php/deletebig#deleting_in_chunks
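
Following that link's pattern, one pass might look like this sketch (the @chunk_start / @chunk_end user variables are illustrative, not from the question's code):

-- Find where the next 1000-row chunk of the table ends.
SELECT Result_ID INTO @chunk_end
FROM ex_result
WHERE Result_ID >= @chunk_start
ORDER BY Result_ID
LIMIT 1000, 1;

-- Delete only the eligible rows inside that key range.
DELETE t1
FROM ex_result AS t1
INNER JOIN ex_session AS t2 ON t1.session_id = t2.session_id
LEFT JOIN ex_result AS t3 ON t3.parentresult_id = t1.result_id
WHERE t1.Result_ID >= @chunk_start
  AND t1.Result_ID < @chunk_end
  AND t3.result_id IS NULL
  AND t2.DeletedOn IS NOT NULL;

-- Advance to the next chunk; when the SELECT ... INTO finds no row
-- (end of table), reset @chunk_start to the lowest Result_ID and start over.
SET @chunk_start = @chunk_end;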

Upvotes: 1

Meier

Reputation: 3890

At the very least, you need to start a transaction in your event code.

This is not done implicitly by the BEGIN ... END; see http://dev.mysql.com/doc/refman/5.6/en/commit.html:

Within all stored programs (stored procedures and functions, triggers, and events), the parser treats BEGIN [WORK] as the beginning of a BEGIN ... END block. Begin a transaction in this context with START TRANSACTION instead.
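
Applied to the event from the question, the skeleton could look like this sketch (the purge statements themselves are unchanged):

CREATE EVENT EVENT_RESULT_CLEANER
ON SCHEDULE EVERY 10 SECOND
DO
BEGIN
    -- temporary-table DDL is best kept outside the transaction
    START TRANSACTION;   -- BEGIN here would only open a block, not a transaction

    -- ... the INSERT INTO Temp_Result_Purge and the DELETE from the question ...

    COMMIT;
END//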

Also, if you run this every 10 seconds, then think about changing your architecture. A relational database is not so good for data with a short life span. An alternative may be a message queue.

Upvotes: 1

Artjoman

Reputation: 286

In my opinion the best solution would be to use soft deletes.

Just set the status of a deleted object to deleted. If the number of records is really huge and you don't want to store history data at all, you can run a scheduled database purge nightly or weekly.

One of the drawbacks is that you will have to rewrite the data retrieval logic by adding one new condition, as sketched below.
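
A sketch of what that could look like; the IsDeleted column name is made up for illustration:

-- One-time schema change (hypothetical column name).
ALTER TABLE ex_result
    ADD COLUMN IsDeleted TINYINT(1) NOT NULL DEFAULT 0;

-- "Deleting" becomes a flag update instead of a row removal.
UPDATE ex_result SET IsDeleted = 1 WHERE Result_ID = @Result_ID;

-- Every read gains the extra condition.
SELECT Result_ID, Name, Status
FROM ex_result
WHERE Session_ID = @Session_ID
  AND IsDeleted = 0;

-- Nightly/weekly purge of flagged rows.
DELETE FROM ex_result WHERE IsDeleted = 1;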

Upvotes: 0
