Reputation: 9
Hi, I have a Postgres table that gets written to when users log in to my application. The query performs an INSERT ... ON CONFLICT, gets called about 150 times per second, and I am seeing a lot of time spent in lock waits.
The query itself takes around 50ms to execute on average.
Is there a way to improve my query to reduce lock waits?
There are two tables:
Users
| ID (uuid, PK) | Identifier (varchar(100)) | Name | Project (uuid, FK) | Team | Attributes (json) |
|---|---|---|---|---|---|
| abcd-2343-errere | bob | Bob | 34543-54545-435435-5345 | Squad1 | {'age': 29, 'tags': ['one']} |
| 3432-2343-234342 | geroge | George | 34543-54545-435435-5345 | Squad1 | {'age': 43, 'tags': ['ten']} |
| 234343-42343-4324324 | john | John | 23434-234-4234324-432 | Squad2 | {'age': 54, 'tags': ['two']} |
When the user logs in, we insert into the Users table, unless there is a conflict where the Identifier and Project match; in that case we update the name and attributes columns.
INSERT INTO "users" ("identifier", "name", "attributes", "team", "project")
VALUES ($1, $2, $3, $4, $5)
ON CONFLICT ("project", "identifier")
DO UPDATE SET
    "name" = "excluded"."name",
    "attributes" = "excluded"."attributes";
There are two indexes on the table
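One of them has to be the unique index (or unique constraint) that backs the ON CONFLICT target, i.e. something along these lines (the index name here is only illustrative, not the real one):

CREATE UNIQUE INDEX users_project_identifier_key   -- name assumed
    ON "users" ("project", "identifier");          -- required for ON CONFLICT ("project", "identifier")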
Is there a better way to write this to reduce lock wait time on the table? With so many queries coming in per second, would I be better off performing batch inserts?
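To illustrate what I mean by batching: the idea would be to accumulate a few logins and send them as a single multi-row upsert, roughly like the sketch below (the literal values stand in for the accumulated parameters, and the batch would have to be de-duplicated on (project, identifier) first, since ON CONFLICT DO UPDATE cannot affect the same row twice in one statement):

-- Sketch of a batched upsert; values shown are the sample rows from the table above
INSERT INTO "users" ("identifier", "name", "attributes", "team", "project")
VALUES
    ('bob',    'Bob',    '{"age": 29, "tags": ["one"]}', 'Squad1', '34543-54545-435435-5345'),
    ('geroge', 'George', '{"age": 43, "tags": ["ten"]}', 'Squad1', '34543-54545-435435-5345'),
    ('john',   'John',   '{"age": 54, "tags": ["two"]}', 'Squad2', '23434-234-4234324-432')
ON CONFLICT ("project", "identifier")
DO UPDATE SET
    "name" = "excluded"."name",
    "attributes" = "excluded"."attributes";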
Upvotes: 0
Views: 894
Reputation: 247950
There are only two things you can do to improve that:
- make the statement faster, perhaps by reducing the number of indexes and triggers on the table, or by getting faster hardware
- see that the statement is called as late in the transaction as possible, so that the lock on the row is held for the shortest possible time
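For example, if the login is processed in a transaction that does other work as well, issue the upsert as the last statement before COMMIT rather than the first. A sketch, assuming the other statements don't depend on the upsert's result:

BEGIN;
-- ... do the rest of the login work first ...
-- the upsert comes last, so the row lock on "users" is held only
-- for the short interval before COMMIT
INSERT INTO "users" ("identifier", "name", "attributes", "team", "project")
VALUES ($1, $2, $3, $4, $5)
ON CONFLICT ("project", "identifier")
DO UPDATE SET "name" = "excluded"."name",
              "attributes" = "excluded"."attributes";
COMMIT;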
Upvotes: 1