Reputation: 360
As part of the upgrade process, our product scripts update a stored procedure for a trigger. There are two daemons running, either of which can update the stored procedure. It seems that PostgreSQL is not serializing the DDL that upgrades the procedure. The exact error is "DB_Cursor: exception in execute: tuple concurrently updated". A Google search yields no exact matches for this error. It would appear we have a race condition. What is the best approach for avoiding or preventing such an exception? It blocks the upgrade process from succeeding, and one or both of the daemons must be restarted to retry the upgrade and recover. Is there a known issue with PostgreSQL? We are running PostgreSQL 9.2.5.
Upvotes: 1
Views: 1118
Reputation: 23910
If by design multiple concurrent clients can decide to perform DDL, then you really should ensure that only one of them does it at a time. You can do this with advisory locks.
Example in pseudocode:
function try_upgrade(db) {
    if ( ! is_upgrade_needed(db) ) {
        // check before acquiring the lock to speed up the common case of
        // no upgrade being available
        return UPGRADE_NOT_NEEDED;
    }
    query_result = db->begin_transaction();
    if ( query_result < 0 ) throw Error("begin failed");
    query_result = db->query(
        "select pg_advisory_xact_lock(?)", MAGIC_NUMBER_UPGRADE_LOCK
    );
    if ( query_result < 0 ) throw Error("pg_advisory_xact_lock failed");
    // another client might have performed the upgrade between the previous
    // check and our acquiring the advisory lock
    if ( ! is_upgrade_needed(db) ) {
        db->rollback_transaction();
        return UPGRADE_NOT_NEEDED;
    }
    perform_upgrade(db);
    query_result = db->commit_transaction();
    if ( query_result < 0 ) throw Error("commit failed");
    return UPGRADE_PERFORMED;
}
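To make the check-lock-recheck pattern concrete, here is a minimal runnable simulation in Python. The names (FakeDB, try_upgrade) are hypothetical stand-ins: a threading lock plays the role of pg_advisory_xact_lock, and a flag plays the role of "is the upgrade already applied". Several threads race to upgrade, but the re-check under the lock guarantees exactly one of them performs it:

```python
import threading

UPGRADE_NOT_NEEDED = "not_needed"
UPGRADE_PERFORMED = "performed"

class FakeDB:
    """Stand-in for a database: `lock` models pg_advisory_xact_lock,
    `upgraded` models whatever is_upgrade_needed() inspects."""
    def __init__(self):
        self.lock = threading.Lock()
        self.upgraded = False
        self.upgrade_count = 0  # how many times the "DDL" actually ran

def is_upgrade_needed(db):
    return not db.upgraded

def try_upgrade(db):
    # cheap check before taking the lock: the common case is "nothing to do"
    if not is_upgrade_needed(db):
        return UPGRADE_NOT_NEEDED
    with db.lock:  # models "begin; select pg_advisory_xact_lock(...)"
        # another client may have upgraded while we waited for the lock
        if not is_upgrade_needed(db):
            return UPGRADE_NOT_NEEDED
        db.upgrade_count += 1  # the real DDL would run here
        db.upgraded = True
    return UPGRADE_PERFORMED  # lock released, models "commit"

db = FakeDB()
threads = [threading.Thread(target=try_upgrade, args=(db,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(db.upgrade_count)  # → 1: only one thread performed the upgrade
```

Without the second is_upgrade_needed() call inside the lock, two threads that both pass the first check would each run the DDL, which is exactly the race the original daemons hit.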
Upvotes: 1
Reputation: 61656
It seems that PostgreSQL is not serializing the DDL to upgrade the procedure
Yes. This is mentioned from time to time on pgsql mailing lists, for example recently here:
'tuple concurrently updated' error when granting permissions
Excerpt:
We do have such locking for DDL on tables/indexes, but the theory in the past has been that it's not worth the trouble for objects represented by single catalog rows, such as functions or roles. You can't corrupt the database with concurrent updates on such a row, you'll just get a "tuple concurrently updated" error from all but the first-to-arrive update.
If you're concurrently replacing function bodies, this is clearly your problem.
And the proposed solution is:
In the meantime, you could consider using an application-managed advisory lock if you really need such grants to work transparently.
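In SQL terms, that workaround means wrapping each CREATE OR REPLACE FUNCTION (or GRANT) in a transaction that first takes an advisory lock. A sketch, where the lock key 42 and the function body are placeholders; the only requirement is that every client that may touch the same object uses the same key:

```sql
BEGIN;
-- every upgrading client must agree on this lock key
SELECT pg_advisory_xact_lock(42);
CREATE OR REPLACE FUNCTION my_trigger_fn() RETURNS trigger AS $$
BEGIN
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
COMMIT;  -- pg_advisory_xact_lock is released automatically at commit
```

Concurrent sessions then queue on the advisory lock instead of updating the same pg_proc row at once, so the "tuple concurrently updated" error cannot occur.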
Upvotes: 2