Reputation: 909
A multithreaded application executes a PL/pgSQL function that inserts records into a critically important table, and also performs select/update/etc. operations along the way.
The issue is that we sometimes end up with duplicate records (2-3 copies), each one passed to the function by a parallel thread. All of them get inserted into the table as the function's result, although they should not be.
This happens because the transactions run in parallel and have no idea that the same record is being prepared for insert in another transaction.
The table is critically important, so all kinds of LOCK TABLE
are extremely unwelcome (LOCK ... IN SHARE MODE,
meanwhile, gave us some useful experience).
So the question is: is there a best practice for organizing a PL/pgSQL function that works with a critical resource (table), is executed by a multithreaded app, and produces no harmful locks on that resource?
PS. I know that partitioning the threads by record ID in the app is a possible solution, but I'm interested in a PL/pgSQL solution first of all.
Upvotes: 0
Views: 491
Reputation: 45795
Sometimes you can use advisory locks - http://www.postgresql.org/docs/current/static/explicit-locking.html . These locks reserve some subset of numbers for application-defined use. I have used them to synchronize parallel inserts with success.
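A minimal sketch of this approach (table and column names here are hypothetical): take a transaction-scoped advisory lock keyed on the record ID at the top of the function, so that two concurrent calls with the same ID serialize against each other, and only the first one performs the insert:

```sql
CREATE OR REPLACE FUNCTION insert_record(p_id bigint, p_payload text)
RETURNS void AS $$
BEGIN
    -- Blocks until no other transaction holds the advisory lock for this ID.
    -- The lock is released automatically at commit or rollback, and it does
    -- not lock the table itself - only the abstract number p_id.
    PERFORM pg_advisory_xact_lock(p_id);

    -- Now it is safe to check for a duplicate: any parallel call with the
    -- same p_id is waiting on the lock above and will see our insert.
    IF NOT EXISTS (SELECT 1 FROM important_table WHERE id = p_id) THEN
        INSERT INTO important_table (id, payload) VALUES (p_id, p_payload);
    END IF;
END;
$$ LANGUAGE plpgsql;
```

Note that advisory lock keys share a single global number space per database, so if several unrelated pieces of code use them, you may want the two-argument form `pg_advisory_xact_lock(int, int)` to namespace the keys.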
Upvotes: 2