Reputation: 4884
I have a table that has the following fields:
entry_id BIGSERIAL PRIMARY KEY,
site_id BIGINT NOT NULL,
uuid VARCHAR(256) NOT NULL,
session_start TIMESTAMP NOT NULL,
session_end TIMESTAMP NOT NULL,
user_ip VARCHAR(40) NOT NULL,
user_agent VARCHAR(256) NOT NULL,
Now, I have a lot of incoming requests that carry tuples of data along the lines of (site_id, uuid, timestamp, user_ip, user_agent).
My rule is: if there is an entry in the database whose session_end is less than 3 hours old, the incoming request updates that entry's session_end = timestamp. If not, it creates a new entry (where session_start = session_end = timestamp).
The incoming requests are handled by multiple processes. So say 3-4 incoming requests hit my servers with the same data (different timestamps, but only milliseconds apart) and are handled by 3 different processes - how do I avoid creating 3 different records (if they all check at the same time, see no matching records, and each create a new one)? It's a race condition and I'm not sure how to prevent it.
A table lock seems like overkill, as this is a write-heavy table, but what alternatives do I have outside of a 3rd party lock mechanism?
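Per incoming request, the logic is roughly this (plain SQL just to illustrate; request stands in for my table name and now() for the incoming timestamp):
-- step 1: look for a recent session
SELECT entry_id
  FROM request
 WHERE site_id = 1
   AND uuid = '123'
   AND session_end > now() - INTERVAL '3 hours';

-- step 2a: if a row came back, extend it
UPDATE request
   SET session_end = now()
 WHERE site_id = 1
   AND uuid = '123'
   AND session_end > now() - INTERVAL '3 hours';

-- step 2b: if it did not, start a new session
INSERT INTO request (site_id, uuid, session_start, session_end, user_ip, user_agent)
VALUES (1, '123', now(), now(), '123.123.123.123', 'Mozilla/Chrome');

-- the race: between step 1 and step 2b another process can run the same
-- check, also see no row, and insert a duplicate session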
Example:
Format:
(site_id, uuid, timestamp, user_ip, user_agent)
Incoming requests / data:
(1, 123, 2014-01-01T10:00:32, '123.123.123.123', 'Mozilla/Chrome')
(1, 123, 2014-01-01T10:00:33, '123.123.123.123', 'Mozilla/Chrome')
(1, 123, 2014-01-01T10:00:34, '123.123.123.123', 'Mozilla/Chrome')
Result tuple:
entry_id | site_id | uuid | session_start | session_end | user_ip | user_agent
--------------------------------------------------------------------------------------------
<auto> | 1 | 123 | 2014-01-01T10:00:32 | 2014-01-01T10:00:34 | ... | ...
Upvotes: 2
Views: 229
Reputation: 1763
Check out advisory locks.
SELECT pg_advisory_lock(key);
-- INSERT OR UPDATE ...
SELECT pg_advisory_unlock(key);
Or using the non-blocking version:
SELECT pg_try_advisory_lock(key) INTO :acquired;
-- if acquired: INSERT OR UPDATE ...
-- ... and only then release the lock:
SELECT pg_advisory_unlock(key);
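A minimal sketch of the whole flow, assuming the table from the question (called request here) and the 3-hour window. It uses the transaction-scoped variant pg_advisory_xact_lock, which is released automatically at COMMIT, and hashtext() (an internal PostgreSQL hash function) as one way to derive a lock key from the identifying columns:
BEGIN;

-- serialize all writers for this (site_id, uuid) pair
SELECT pg_advisory_xact_lock(hashtext('1' || '/' || '123'));

-- extend an existing session that ended less than 3 hours ago ...
UPDATE request
   SET session_end = timestamp '2014-01-01T10:00:33'
 WHERE site_id = 1
   AND uuid = '123'
   AND session_end > timestamp '2014-01-01T10:00:33' - interval '3 hours';

-- ... or start a new one if there was none
INSERT INTO request (site_id, uuid, session_start, session_end, user_ip, user_agent)
SELECT 1, '123', timestamp '2014-01-01T10:00:33', timestamp '2014-01-01T10:00:33',
       '123.123.123.123', 'Mozilla/Chrome'
 WHERE NOT EXISTS (
       SELECT 1
         FROM request
        WHERE site_id = 1
          AND uuid = '123'
          AND session_end > timestamp '2014-01-01T10:00:33' - interval '3 hours');

COMMIT;
Because every process takes the same lock key for the same (site_id, uuid) before checking, the check-then-insert can no longer interleave, while writes for unrelated sessions are not blocked.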
Upvotes: 1
Reputation: 125454
Create a GiST exclusion constraint with a timestamp range type:
create table request (
entry_id bigserial primary key,
site_id bigint not null,
uuid varchar(256) not null,
session_start timestamp not null,
session_end timestamp not null,
user_ip varchar(40) not null,
user_agent varchar(256) not null,
constraint session_overlap exclude using gist (
site_id with =,
uuid with =,
user_ip with =,
user_agent with =,
tsrange(session_end, session_end + interval '3 hours', '[)') with &&
)
);
Now an insert like this fails:
insert into request (site_id, uuid, session_start, session_end, user_ip, user_agent)
select site_id, uuid, ts::timestamp, ts::timestamp, user_ip, user_agent
from (values
(1, '123', '2014-01-01T10:00:32', '123.123.123.123', 'Mozilla/Chrome'),
(1, '123', '2014-01-01T10:00:33', '123.123.123.123', 'Mozilla/Chrome'),
(1, '123', '2014-01-01T10:00:34', '123.123.123.123', 'Mozilla/Chrome')
) s(site_id, uuid, ts, user_ip, user_agent)
;
ERROR: conflicting key value violates exclusion constraint "session_overlap"
DETAIL: Key (site_id, uuid, user_ip, user_agent, tsrange(session_end, session_end + '03:00:00'::interval, '[)'::text))=(1, 123, 123.123.123.123, Mozilla/Chrome, ["2014-01-01 10:00:33","2014-01-01 13:00:33")) conflicts with existing key (site_id, uuid, user_ip, user_agent, tsrange(session_end, session_end + '03:00:00'::interval, '[)'::text))=(1, 123, 123.123.123.123, Mozilla/Chrome, ["2014-01-01 10:00:32","2014-01-01 13:00:32")).
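With the constraint in place, one way to turn the rejected insert into the desired update is to catch the violation, for example in a PL/pgSQL block. A sketch, assuming the 3-hour rule and the sample values from the question:
do $$
begin
    -- try to start a new session
    insert into request (site_id, uuid, session_start, session_end, user_ip, user_agent)
    values (1, '123', timestamp '2014-01-01T10:00:33', timestamp '2014-01-01T10:00:33',
            '123.123.123.123', 'Mozilla/Chrome');
exception when exclusion_violation then
    -- a session for this visitor ended within the last 3 hours: extend it instead
    update request
       set session_end = timestamp '2014-01-01T10:00:33'
     where site_id = 1
       and uuid = '123'
       and user_ip = '123.123.123.123'
       and user_agent = 'Mozilla/Chrome'
       and session_end > timestamp '2014-01-01T10:00:33' - interval '3 hours';
end
$$;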
You may need to install the btree_gist extension as superuser:
create extension btree_gist;
http://www.postgresql.org/docs/current/interactive/btree-gist.html
Upvotes: 2