Reputation: 1345
I have a Node.js app that does the following:
data = [someObjectWithDate1, someObjectWithDate2, someObjectWithDate3, ....]

// each item contains an age group
data.forEach(async (item) => {
  // a db connection is already obtained at this point and it allows raw queries
  // (typically a knex instance wrapper function/object)
  await dbConnection().raw(
    `
    CREATE OR REPLACE FUNCTION my_function() RETURNS VOID LANGUAGE plpgsql AS $$
    DECLARE
      my_integer integer := ${item.age_group};
      my_record record;
    BEGIN
      SELECT * INTO my_record FROM my_table WHERE age_group = 32;
      -- some weird calculation, but it's only a sample
      UPDATE my_table SET age_group = my_record.age_group + my_integer
      WHERE age_group = my_integer;
    END;
    $$;

    BEGIN;
    LOCK TABLE my_table IN ACCESS EXCLUSIVE MODE;
    SELECT * FROM my_function();
    COMMIT;
    `
  )
})
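(A side note on the execution model: `forEach` never awaits its async callback, so every iteration's query is fired without waiting for the previous one to finish. A minimal sketch of the difference, with a hypothetical `runQuery` standing in for the `dbConnection().raw(...)` call:)

// forEach ignores the promises returned by the callback,
// so every runQuery starts without waiting for the previous one
data.forEach(async (item) => {
  await runQuery(item);
});

// a plain for...of loop (inside an async function) serializes them:
// each iteration awaits its query before the next one starts
for (const item of data) {
  await runQuery(item);
}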
Now, no matter how many async functions execute in parallel, Postgres should block concurrent transactions and not give any concurrency issues, right? (Albeit hindering performance, which doesn't matter at this point.) But when a large enough array is passed in data, I get the following error:
tuple concurrently updated {
  "length": 90,
  "name": "error",
  "severity": "ERROR",
  "code": "XX000",
  "file": "heapam.c",
  "line": "3888",
  "routine": "simple_heap_update",
  "stack": "error: ..... [the query above is printed here, as well as the following:] tuple concurrently updated
    at Parser.parseErrorMessage (/app/node_modules/pg-protocol/src/parser.ts:357:11)
    at Parser.handlePacket (/app/node_modules/pg-protocol/src/parser.ts:186:21)
    at Parser.parse (/app/node_modules/pg-protocol/src/parser.ts:101:30)
    at Socket.<anonymous> (/app/node_modules/pg-protocol/src/index.ts:7:48)
    at Socket.emit (events.js:375:28)
    at Socket.emit (domain.js:470:12)
    at addChunk (internal/streams/readable.js:290:12)
    at readableAddChunk (internal/streams/readable.js:265:9)
    at Socket.Readable.push (internal/streams/readable.js:204:10)
    at TCP.onStreamRead (internal/stream_base_commons.js:188:23)"
}
It's simply not working. What's the reason behind this?
UPDATE: I also wrapped the function call in an advisory lock as suggested in the comment below, but it's still giving me the above error when using large enough arrays:
BEGIN;
SELECT pg_advisory_xact_lock(2142616474639426746);
SELECT * FROM my_function();
COMMIT;
CREATE OR REPLACE FUNCTION my_function() ....
Upvotes: 1
Views: 499
Reputation: 1345
Thanks to @sudo's comment above: if I move the statements like this:
BEGIN;
SELECT pg_advisory_xact_lock(2142616474639426746);
CREATE OR REPLACE FUNCTION my_function() ....
--the whole function definition is wrapped by an advisory lock
SELECT * FROM my_function();
COMMIT;
then it seems to avoid the problem. I've tested with large data (actually faked it by looping over the same data repeatedly). Do note that this might not be the most efficient way to do this sort of thing, but it works. The problem with the attempt in the question was that the advisory lock only covered the transaction it was taken in, so the CREATE OR REPLACE FUNCTION (which updates the function's existing row in the pg_proc catalog) still ran outside its scope, i.e. unlocked, and concurrent re-creations of the same function conflicted with each other. But all seems fine and dandy now.
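For completeness, a minimal sketch of what the working call looks like in the Node.js code from the question (same identifiers as above; the lock key is an arbitrary bigint, it just has to be the same for every caller):

data.forEach(async (item) => {
  await dbConnection().raw(
    `
    BEGIN;
    -- take the advisory lock BEFORE the function is (re)created, so both
    -- the catalog update and the call happen while we hold the lock
    SELECT pg_advisory_xact_lock(2142616474639426746);
    CREATE OR REPLACE FUNCTION my_function() RETURNS VOID LANGUAGE plpgsql AS $$
    DECLARE
      my_integer integer := ${item.age_group};
      my_record record;
    BEGIN
      SELECT * INTO my_record FROM my_table WHERE age_group = 32;
      UPDATE my_table SET age_group = my_record.age_group + my_integer
      WHERE age_group = my_integer;
    END;
    $$;
    SELECT * FROM my_function();
    COMMIT; -- pg_advisory_xact_lock is released automatically at commit
    `
  )
})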
Upvotes: 1