Reputation: 3679
The PostgreSQL documentation 13.2.3. Serializable Isolation Level states:
It is important that an environment which uses this technique have a generalized way of handling serialization failures (which always return with a SQLSTATE value of '40001'), because it will be very hard to predict exactly which transactions might contribute to the read/write dependencies and need to be rolled back to prevent serialization anomalies.
Is this as simple as always immediately retrying the transaction until it succeeds? Pseudocode:
while (true) {
    try {
        BEGIN TRANSACTION
        ...
        COMMIT TRANSACTION
    }
    catch (SQLSTATE == '40001') {
        ROLLBACK TRANSACTION  // clear the aborted transaction before retrying
        continue; // retry
    }
    break; // succeeded
}
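The loop above can be sketched as a generic wrapper in Python. This is only an illustration of the retry pattern, not a specific driver's API: `SerializationFailure`, `run_in_transaction`, and the connection methods are all hypothetical stand-ins (a real driver such as psycopg would raise its own exception class carrying SQLSTATE `'40001'`).

```python
class SerializationFailure(Exception):
    """Stand-in for a driver error with SQLSTATE '40001'."""
    sqlstate = "40001"

def run_in_transaction(conn, work, max_attempts=10):
    """Re-run the whole transaction body `work(conn)` until it commits.

    The entire body is re-executed on each retry; partial results from
    a failed attempt must never be reused.  A bounded attempt count
    guards against (unlikely) endless retry storms.
    """
    for attempt in range(max_attempts):
        try:
            conn.begin()
            result = work(conn)
            conn.commit()
            return result
        except SerializationFailure:
            conn.rollback()  # mandatory: clear the aborted transaction
    raise RuntimeError("transaction kept failing with SQLSTATE 40001")

# Minimal fake connection to demonstrate the call pattern:
class FakeConn:
    def __init__(self, failures):
        self.failures = failures  # how many times work() will fail
        self.log = []
    def begin(self):    self.log.append("begin")
    def commit(self):   self.log.append("commit")
    def rollback(self): self.log.append("rollback")

def work(conn):
    if conn.failures:
        conn.failures -= 1
        raise SerializationFailure()
    return "done"
```

Note that the wrapper is generalized in exactly the sense the documentation asks for: the transaction body is a function value, so any transaction in the application can be routed through the same retry logic.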
Is it guaranteed to succeed at some point? Are there situations where it's impossible for it to ever succeed after failing once? What else can or should one do? Does it depend on the scenario? Maybe sometimes earlier transactions must be repeated too? In that case, how is a generalized solution possible?
Are there any open source examples of such a generalized handling mechanism? Or any good resources (books, articles, ...) describing different approaches?
Upvotes: 0
Views: 85
Reputation: 248165
Your code sample shows exactly how it should be done.
Serialization failures should happen rarely; if they happen frequently, something in your database or application design is unsound.
I guess it is possible to construct a scenario where a transaction can never succeed, but that is not a problem in practice.
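One standard mitigation for the pathological case where immediate retries keep colliding with the same concurrent transactions (not something the answer prescribes, just a common pattern) is to sleep with randomized exponential backoff between attempts:

```python
import random

def backoff_delay(attempt, base=0.01, cap=1.0):
    """Full-jitter backoff: a random delay, in seconds, drawn from
    [0, min(cap, base * 2**attempt)).  Randomization de-synchronizes
    competing transactions so repeated collisions become unlikely."""
    return random.uniform(0, min(cap, base * 2 ** attempt))
```

In the retry loop, `time.sleep(backoff_delay(attempt))` would go in the failure branch, before the next attempt.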
Upvotes: 1