Reputation: 6368
I've been digging through Stack Overflow, and it seems odd that a database management system as powerful as Postgres does not have a simple way to provide this.
The best solution I have so far is
id serial primary key,
but in the event that you want to do an
INSERT ... VALUES ... ON CONFLICT DO NOTHING
(I'm adding this last part because Postgres cannot do INSERT OR IGNORE), the sequence keeps counting up even when insertions fail.
So the table you get might look something like:
 id | name
  1 | amy
  5 | bob
 12 | john
104 | thomas
where the missing ids in between correspond to failed insertions. The app works, but I find this annoying. Suggestions?
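For anyone who wants to reproduce this, here is a minimal sketch of what I mean (the table and column names are just placeholders):

CREATE TABLE users (
    id   serial PRIMARY KEY,
    name text UNIQUE
);

INSERT INTO users (name) VALUES ('amy');   -- gets id 1
INSERT INTO users (name) VALUES ('amy')
    ON CONFLICT DO NOTHING;                -- no row inserted, but nextval() was still consumed
INSERT INTO users (name) VALUES ('bob');   -- gets id 3, not 2

SELECT * FROM users;                       -- ids 1 and 3, with a gap at 2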
Upvotes: 0
Views: 92
Reputation: 5697
To build on @Gordon Linoff's comment, this behaviour is a vital part of scalability: it allows the DB to process inserts concurrently across connections.
Without it, to achieve the no-gap scenario, every transaction would have to commit or roll back in its entirety before the next ID could be assigned to any table. DBs that use the concept of a SEQUENCE
would not be able to cache blocks of values, meaning every allocation of a sequence number (for whatever reason) would require a lock, which in turn could block other sessions.
In both cases, in order to guarantee the next value, the DB cannot proceed until nothing else is pending. You'd basically end up with a giant queue of serialised inserts, and the database would not scale, at least for inserts.
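To picture the caching point, here is a sketch (the sequence name is made up purely for illustration) of Postgres handing out values in pre-allocated blocks, which is exactly what a gap-free guarantee would rule out:

-- Each session that calls nextval() pre-allocates a block of 50 values,
-- so concurrent sessions never block each other on the sequence; any
-- values a session doesn't use before it disconnects simply become gaps.
CREATE SEQUENCE order_id_seq CACHE 50;

SELECT nextval('order_id_seq');  -- session A gets 1 and reserves 1..50
SELECT nextval('order_id_seq');  -- session B gets 51 and reserves 51..100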
All databases work like this, and this is the reason why. I hope that helps you understand, even if you still find it annoying. The only real way around it is to allocate a true gap-free number after the event, either in a big batch or as you SELECT,
but you'll generally find it's rarely, if ever, worth doing.
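If you do want dense numbers for display, a sketch of the "as you SELECT" option, using the question's example table, is a window function over the real ids:

-- Compute a gap-free running number at read time; the stored ids stay as they are.
SELECT row_number() OVER (ORDER BY id) AS seq,
       id,
       name
FROM   users;

That keeps inserts cheap and concurrent, and only pays the numbering cost when you actually need it.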
Upvotes: 2