Zsolt János

Reputation: 511

In PostgreSQL, do multiple UPDATEs to different rows in the same table take conflicting locks?

I'm wondering about an update I am making to a large table, and whether I need to worry about locks.

I have a table looking like this:

CREATE TABLE "ItemsToProcess"( 
"id" text, 
"WorkerInstanceId" text, 
"ProcessingStartTime" timestamp with time zone, 
"UpdatedTime" timestamp with time zone, 
CONSTRAINT "ITP_PK" PRIMARY KEY ("id")
)WITH (
  OIDS=FALSE
);

Initially, this table has ~2.0 million rows with only the id column filled in; WorkerInstanceId and the two timestamps are NULL by default and at the start of the run.

What happens is that some worker apps (at least two, but there will be around 10-13 in production) will each mark a batch of IDs (I plan to set batchSize to 200) from this table for them to process. What happens during processing doesn't really matter now.

The marking of a batch looks like this:

UPDATE "ItemsToProcess" 
   SET "WorkerInstanceId" = ?, "ProcessingStartTime" = current_timestamp()
 WHERE "WorkerInstanceId" is NULL
 LIMIT 200;

My question is, do I need to worry about locking the rows I'm going to update before making the update?

Postgres documentation says:

ROW EXCLUSIVE

Conflicts with the SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, and ACCESS EXCLUSIVE lock modes.

The commands UPDATE, DELETE, and INSERT acquire this lock mode on the target table (in addition to ACCESS SHARE locks on any other referenced tables). In general, this lock mode will be acquired by any command that modifies the data in a table.

So I think that whenever one of the workers makes this update, the whole table is locked, 200 rows are updated and the lock is freed up in the end. While the lock is in place, the other workers are waiting for it to free up. Is this right or am I missing something?

Upvotes: 5

Views: 8143

Answers (2)

Craig Ringer

Reputation: 325091

UPDATE locks the row, so you do not need to lock it first. If you try to UPDATE overlapping sets of rows simultaneously, the second UPDATE will wait for the first's transaction to commit or roll back.

The big problem with your approach - other than the fact that UPDATE doesn't have a LIMIT clause - is that multiple workers will all try to grab the same rows. Here's what happens:

  • worker1: filters the table to find 200 rows and locks them
  • worker1: starts updating rows
  • worker2: filters the table to find 200 rows
  • worker2: tries to start updating rows, but has selected the same rows as worker1, so it blocks on worker1's lock
  • worker1: finishes updating rows
  • worker2: after the lock is released, re-checks the WHERE condition and finds that none of the rows match anymore because worker1 has updated them. Updates zero rows.

... and repeat!

You need to either:

  • Have a central queue handing out rows in a proper concurrency-safe way; or
  • Assign workers non-overlapping ranges of IDs to work on
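In newer PostgreSQL versions (9.5+, which postdates this answer) the first option can be built directly in SQL with FOR UPDATE SKIP LOCKED, which lets concurrent workers skip rows another worker has already locked instead of blocking on them. A sketch against the question's table (the worker id literal is illustrative):

```sql
-- Each worker atomically claims up to 200 unclaimed rows.
-- SKIP LOCKED makes concurrent workers pass over rows that are
-- already row-locked by another transaction rather than waiting.
UPDATE "ItemsToProcess"
   SET "WorkerInstanceId" = 'worker-1',          -- this worker's id (example)
       "ProcessingStartTime" = current_timestamp
 WHERE "id" IN (
         SELECT "id"
           FROM "ItemsToProcess"
          WHERE "WorkerInstanceId" IS NULL
          ORDER BY "id"
          LIMIT 200
            FOR UPDATE SKIP LOCKED
       )
RETURNING "id";
```

With this pattern, two workers running the statement at the same time claim disjoint batches: the second worker's subquery simply skips the rows the first has locked and picks the next 200 unclaimed ids.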

As for LIMIT - you could use WHERE id IN (SELECT t.id FROM thetable t ORDER BY t.id LIMIT 200) (note that ORDER BY must come before LIMIT) - but you'd have the same problem with both workers choosing the same set of rows to update.

Upvotes: 10

Mark Stosberg

Reputation: 13411

You are missing a couple of things.

First, PostgreSQL does not offer a LIMIT option for UPDATE. See the docs for UPDATE.

Second, note that ROW EXCLUSIVE does not conflict with itself; it conflicts with SHARE ROW EXCLUSIVE, which is different. So your UPDATE statements can safely run concurrently from multiple workers. You will still want your update times to be low, but you already have a built-in way to tune that: lower your batchSize if you run into problems.
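To see that ROW EXCLUSIVE does not conflict with itself, consider two sessions updating disjoint rows of the question's table at the same time (session labels and id values here are illustrative):

```sql
-- Session A:
BEGIN;
UPDATE "ItemsToProcess" SET "WorkerInstanceId" = 'A' WHERE "id" = 'item-1';
-- takes ROW EXCLUSIVE on the table + a row lock on 'item-1'

-- Session B, concurrently (does not block):
BEGIN;
UPDATE "ItemsToProcess" SET "WorkerInstanceId" = 'B' WHERE "id" = 'item-2';
-- also takes ROW EXCLUSIVE on the table; the two table-level locks
-- are compatible, and the row locks cover different rows

COMMIT;  -- in each session
```

Session B would only wait if it touched a row that Session A had already updated; the table-level locks themselves never conflict.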

Upvotes: 2
