Reputation: 924
I am curious what the best practice is to avoid this small problem :)
Let's take two clients, both trying to insert rows into a table with a unique constraint.
Suppose the constraint is on the "url" column.
At some point the following situation may occur:
1) Client A checks whether url "a" is in the table
2) Client A gets the answer that the url is not there
3) Client A is about to insert url "a"
4) Client B checks whether url "a" is in the table
5) Client B gets the answer that the url is not there
6) Client A inserts url "a"
7) Client B tries to insert url "a", but gets a duplicate key error
Do I understand correctly that the best way to avoid this is simply to retry at the application level?
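For reference, this is roughly the SQL each client runs (the urls table and its columns are just made-up names to illustrate the setup described above):

    -- hypothetical schema for illustration
    CREATE TABLE urls (
        id  serial PRIMARY KEY,
        url text NOT NULL UNIQUE
    );

    -- steps 1-2 / 4-5: the client checks whether the url is already there
    SELECT 1 FROM urls WHERE url = 'a';

    -- steps 3 / 6 / 7: if nothing came back, the client inserts it;
    -- between the SELECT and the INSERT another client may insert
    -- the same url, so this INSERT can fail with a duplicate key error
    INSERT INTO urls (url) VALUES ('a');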
Upvotes: 2
Views: 1148
Reputation: 19501
You can avoid the error with explicit locks taken out at the start of a transaction, but that can be bad for concurrency and throughput.
http://www.postgresql.org/docs/current/interactive/explicit-locking.html
If you don't block concurrent attempts, you need to deal with retries one way or another. Rather than putting that logic in the application code, you can encapsulate it within a function. The logic would be similar to the merge_db() example:
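A minimal sketch adapted to the url case, following the same pattern as the merge_db() example from the PostgreSQL docs (the urls table and the insert_url function name are made up for illustration):

    CREATE FUNCTION insert_url(p_url text) RETURNS void AS
    $$
    BEGIN
        LOOP
            -- done if the url is already there
            PERFORM 1 FROM urls WHERE url = p_url;
            IF found THEN
                RETURN;
            END IF;
            -- not there, so try to insert it; another client may insert
            -- the same url concurrently, in which case we catch the
            -- unique-key failure and loop to check again
            BEGIN
                INSERT INTO urls (url) VALUES (p_url);
                RETURN;
            EXCEPTION WHEN unique_violation THEN
                -- do nothing, loop and check again
            END;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

The application then just calls SELECT insert_url('a'); and the retry logic stays inside the database.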
Upvotes: 2
Reputation: 26729
If you run at the SERIALIZABLE isolation level, client B will be forced to wait for A's transaction to complete before the value for the url is returned. Since the value does not exist yet, you can't place a lock on the record (there is no record), so gap locking on the index is used instead. But as @kgrittn mentioned, this negatively affects performance and concurrency. So it's better to handle the duplicate key error.
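For illustration, a rough sketch of what such a transaction could look like, using the urls table and value from the question; either way, the client must be prepared to retry the whole transaction if it fails (for example with a serialization or duplicate key error):

    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT 1 FROM urls WHERE url = 'a';
    -- insert only if the SELECT returned no row
    INSERT INTO urls (url) VALUES ('a');
    COMMIT;
    -- on failure, the client re-runs the transaction from the top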
Upvotes: 1