Christian Ivicevic

Reputation: 10885

How to prevent update anomalies with multiple clients running non-atomic computations concurrently in PostgreSQL?

I am running three PostgreSQL instances using replication (1 master, 2 slaves) which are accessed by two separate servers: an API server and a server handling the game ticks.

In order to access and manipulate the data I am using an ORM library which allows me to write code as follows:

const resources = await repository.findById(1337);
// some complex computation
resources.iron = computeNewIron(resources.iron);
await repository.save(resources);

Of course, it might occur that the API wants to deduct a specific amount of resources right when the server handling the ticks is trying to update them. In that case either server can end up assuming an incorrect amount of resources, which is basically your typical UPDATE anomaly.

My problem is that I am not just writing a "simple" atomic query such as UPDATE table SET iron = iron + 42 WHERE id = :id. Internally, the ORM library uses a direct assignment that does not self-reference the respective column, which yields something akin to UPDATE table SET iron = 123 WHERE id = :id, where the amount has been computed beforehand.

I assume the anomaly can be prevented by writing queries manually that increment/decrement the values atomically via self-references. I'd like to know which other options can alleviate the issue. Should I wrap my SELECT/computation/UPDATE in a transaction, and would that suffice?

Upvotes: 0

Views: 432

Answers (1)

Laurenz Albe

Reputation: 246493

Your question is a bit unclear, but if your transaction spans several statements yet needs to operate on a consistent state of the database, there are basically two options:

  1. Use pessimistic locking: when you read values from the database, do it with SELECT ... FOR UPDATE. Then the rows are locked for the duration of your transaction, and no concurrent transaction can modify them (a sketch follows this list).

  2. Use optimistic locking: start your transaction in REPEATABLE READ isolation level. Then you see a consistent snapshot of the database for the whole duration of your transaction. If somebody else modifies your data after you read them, your UPDATE will cause a serialization error and you'll have to retry the transaction (see the second sketch below).
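To make the first option concrete, here is a minimal sketch of pessimistic locking using the node-postgres client directly instead of your ORM. The resources table, the iron column, and the computeNewIron helper are taken from your question; the function name and pool setup are illustrative only, and your ORM may expose the same FOR UPDATE behavior through its own locking API.

import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the usual PG* environment variables

// Pessimistic locking: hold a row lock for the whole read/compute/write cycle.
async function deductIronWithLock(id: number, computeNewIron: (iron: number) => number): Promise<void> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    // FOR UPDATE locks the selected row until COMMIT or ROLLBACK,
    // so no concurrent transaction can change it underneath us.
    const { rows } = await client.query(
      "SELECT iron FROM resources WHERE id = $1 FOR UPDATE",
      [id]
    );
    const newIron = computeNewIron(rows[0].iron);
    await client.query("UPDATE resources SET iron = $1 WHERE id = $2", [newIron, id]);
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}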

Optimistic locking is better if conflicts are rare, while pessimistic locking is preferable if conflicts are likely.
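For the second option, a minimal sketch of the optimistic variant under the same assumptions as above: run the whole read/compute/write cycle in REPEATABLE READ and retry when PostgreSQL reports a serialization failure (SQLSTATE 40001). Whether your ORM lets you set the isolation level per transaction depends on the library, so the raw client is used again only to show the SQL involved.

import { Pool } from "pg";

const pool = new Pool();

// Optimistic locking: rely on REPEATABLE READ and retry on serialization failures.
async function deductIronOptimistic(id: number, computeNewIron: (iron: number) => number): Promise<void> {
  for (;;) {
    const client = await pool.connect();
    try {
      await client.query("BEGIN ISOLATION LEVEL REPEATABLE READ");
      const { rows } = await client.query("SELECT iron FROM resources WHERE id = $1", [id]);
      const newIron = computeNewIron(rows[0].iron);
      await client.query("UPDATE resources SET iron = $1 WHERE id = $2", [newIron, id]);
      await client.query("COMMIT");
      return;
    } catch (err) {
      await client.query("ROLLBACK");
      // 40001 = serialization_failure: a concurrent transaction changed the row
      // after we read it, so the loop simply tries again.
      if ((err as { code?: string }).code !== "40001") {
        throw err;
      }
    } finally {
      client.release();
    }
  }
}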

Upvotes: 1
