joshstrike

Reputation: 1823

Prevent a MySQL dirty read from blocking an explicit write lock?

I have a situation involving a large point-of-sale / reservations system I run. Customers have running balances which are updated by a trigger when a purchase transaction occurs. That is to say, there are triggers on insert, update and delete on a table units, which stores purchased units; each trigger sums the customer's purchases and payments and updates a table called customer_balance. Keeping these balances updated without having to calculate them through joins constantly is essential.

The business logic also dictates that multiple inserts or updates to units pertaining to a single customer can sometimes be sent asynchronously. When this happens, it can lead to a deadlock where two triggers try to act on the same row in customer_balance. To prevent this, and since you can't put transactions in a trigger, there are a handful of very quick updates for which I call LOCK TABLES units WRITE, customer_balance WRITE, do the purchase and let the trigger run, and then immediately UNLOCK TABLES. This happens quickly and sporadically enough that it doesn't affect performance even during busy periods.
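For concreteness, a minimal sketch of that pattern is below. The column names (customer_id, amount, balance) are placeholders, and the trigger body is simplified to an incremental update rather than the full re-sum my real triggers do:

    DELIMITER $$
    CREATE TRIGGER units_after_insert
    AFTER INSERT ON units
    FOR EACH ROW
    BEGIN
        -- The real triggers re-sum purchases and payments; an incremental
        -- update is shown here only to keep the sketch short.
        UPDATE customer_balance
           SET balance = balance + NEW.amount
         WHERE customer_id = NEW.customer_id;
    END$$
    DELIMITER ;

    -- The quick purchase path: take explicit write locks so two concurrent
    -- purchases for the same customer cannot deadlock inside the trigger.
    LOCK TABLES units WRITE, customer_balance WRITE;
    INSERT INTO units (customer_id, amount) VALUES (42, 19.99);
    UNLOCK TABLES;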

However, there are certain quarterly and annual reports that need to run extensive, very long SELECTs on these same tables, some taking as long as 30 seconds. When running the reports, I SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED. This seems to have mitigated a lot of other issues where the reports were blocking implicit or row-level locks, but they still block the explicit LOCK TABLES calls. So when these reports are run during peak hours, checkout transactions can freeze for the entire time a report is running.
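This is roughly how a report session is set up today; the SELECT itself is just a placeholder for a real 30-second report query, and the date column is illustrative:

    SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    -- Placeholder for the actual long-running report query.
    SELECT u.customer_id, SUM(u.amount) AS total_purchased
      FROM units AS u
     WHERE u.created_at >= '2024-01-01'
     GROUP BY u.customer_id;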

My question is: Beyond setting isolation level to READ UNCOMMITTED, is there any way to prevent a long-running SELECT from blocking an explicit write lock?

Upvotes: 3

Views: 1420

Answers (2)

Rick James

Reputation: 142298

  1. Do not use LOCK TABLES with InnoDB. It's a sledgehammer. Sledgehammers are dangerous, as you are finding out.
  2. Test for deadlocks. When one occurs, replay the entire transaction (see the sketch after this list). The replay is likely to be faster than waiting for the unlock.
  3. Leave the transaction_isolation at the default (probably).
  4. Build and maintain summary tables. This way those 30+ second reports won't take nearly that long. http://mysql.rjweb.org/doc.php/summarytables
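A minimal sketch of what point 2 could look like if the retry lives in a stored procedure. The procedure name and columns are hypothetical, 1213 is MySQL's ER_LOCK_DEADLOCK error code, and in practice the replay is often done in the application layer instead:

    DELIMITER $$
    CREATE PROCEDURE record_purchase(IN p_customer_id INT, IN p_amount DECIMAL(10,2))
    BEGIN
        DECLARE attempts   INT  DEFAULT 0;
        DECLARE deadlocked BOOL DEFAULT TRUE;
        -- 1213 = ER_LOCK_DEADLOCK: note the failure and let the loop retry.
        DECLARE CONTINUE HANDLER FOR 1213
        BEGIN
            ROLLBACK;
            SET deadlocked = TRUE;
        END;

        WHILE deadlocked AND attempts < 3 DO
            SET deadlocked = FALSE;
            SET attempts = attempts + 1;
            START TRANSACTION;
            -- The balance trigger fires here; if it deadlocks, the handler
            -- rolls back and the WHILE loop replays the whole transaction.
            INSERT INTO units (customer_id, amount) VALUES (p_customer_id, p_amount);
            COMMIT;
        END WHILE;
    END$$
    DELIMITER ;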

Upvotes: 1

Bill Karwin

Reputation: 562310

The table locks acquired by LOCK TABLES are metadata locks. Any query that touches the table holds a metadata lock on it, and will block other sessions from doing LOCK TABLES, just as it blocks DDL statements like ALTER TABLE, RENAME TABLE, TRUNCATE TABLE, or even a CREATE TRIGGER that references that table.
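You can watch this happen yourself in performance_schema while a report is running; the metadata lock instrument may need to be enabled first, depending on your version and configuration:

    -- Enable metadata lock instrumentation if it isn't already.
    UPDATE performance_schema.setup_instruments
       SET ENABLED = 'YES', TIMED = 'YES'
     WHERE NAME = 'wait/lock/metadata/sql/mdl';

    -- While the long report SELECT runs, its shared metadata lock shows as
    -- GRANTED and the LOCK TABLES ... WRITE request shows as PENDING.
    SELECT object_schema, object_name, lock_type, lock_status, owner_thread_id
      FROM performance_schema.metadata_locks
     WHERE object_name IN ('units', 'customer_balance');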

This has nothing to do with the READ UNCOMMITTED isolation level for the transaction. In 30 years of using SQL databases, I have never found a good use for READ UNCOMMITTED. I recommend avoiding that isolation level. It doesn't reduce the locking requirements, and it only invites dirty reads, because your query may read data in a state that has incomplete updates, or which may later be rolled back.

To solve your long-running queries that block updates, the typical solution is to create a replica database instance where you run the long report queries. Even if the report queries block updates on the replica, that's okay, because it doesn't block them on the source instance. Replication is asynchronous, so once the report query is done, the updates resume and gradually catch up.
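If you go the replica route, a quick way to watch the replica catch back up after a report finishes is to check the replication lag on the replica (this is standard MySQL, nothing specific to your setup):

    -- Seconds_Behind_Source shows how far behind the replica is while catching up.
    -- (SHOW SLAVE STATUS / Seconds_Behind_Master on versions before MySQL 8.0.22.)
    SHOW REPLICA STATUS\G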

Another common solution is to use ETL to move the data to a data warehouse, where report queries over large datasets are more efficient anyway because of the way the data is stored.

Upvotes: 3
