Reputation: 1853
I'm modeling a case where batches of products arrive and are consumed over time.
The data looks something like the following:
| id | batch_id | date       | quantity |
|----|----------|------------|----------|
| 1  | 1        | 2020-01-01 | 100      |
| 2  | 1        | 2020-01-01 | -5       |
| 3  | 1        | 2020-01-03 | -3       |
| 4  | 2        | 2020-02-01 | 50       |
| 5  | 2        | 2020-02-03 | -1       |
Each batch starts off with some quantity, which is consumed over time.
I want to ensure that, for each batch, any negative quantities always come after the initial positive quantity.
For this I've used the following exclusion constraint (backed by a GiST index):
alter table batch_units
    add constraint batch_units_date_overlap_check
    exclude using gist (
        batch_id with =,
        (
            case when quantity >= 0
                then daterange('-infinity'::date, date, '()')
                else daterange(date, date, '[]')
            end
        ) with &&
    );
This works for almost every case, but it doesn't allow removing from the same batch twice on the same day, since both removals map to the same single-day range. It would be ideal if that were allowed.
I'm not sure if I'm approaching the problem correctly here. Does anyone have any suggestions?
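For reference, here is a minimal reproduction of the failing case; the CREATE TABLE is my own guess at the schema, with only the column names taken from the sample data above:

    -- btree_gist is required for "batch_id with =" in a GiST exclusion constraint
    create extension if not exists btree_gist;

    -- hypothetical table definition matching the sample data
    create table batch_units (
        id       serial primary key,
        batch_id int    not null,
        date     date   not null,
        quantity int    not null
    );

    alter table batch_units
        add constraint batch_units_date_overlap_check
        exclude using gist (
            batch_id with =,
            (
                case when quantity >= 0
                    then daterange('-infinity'::date, date, '()')
                    else daterange(date, date, '[]')
                end
            ) with &&
        );

    -- initial quantity and first removal succeed
    insert into batch_units (batch_id, date, quantity) values (1, '2020-01-01', 100);
    insert into batch_units (batch_id, date, quantity) values (1, '2020-01-03', -3);

    -- rejected: this row and the previous one both map to the single-day range
    -- [2020-01-03, 2020-01-03] for batch 1, and identical ranges overlap
    insert into batch_units (batch_id, date, quantity) values (1, '2020-01-03', -2);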
Upvotes: 1
Views: 124
Reputation: 521168
If you wanted to find the batches which are valid, i.e. ones which do not start off with a negative quantity, you may use the following NOT EXISTS logic:
SELECT DISTINCT batch_id
FROM batch_units bu1
WHERE
    quantity > 0 AND
    NOT EXISTS (SELECT 1 FROM batch_units bu2
                WHERE bu2.batch_id = bu1.batch_id AND
                      bu2.date < bu1.date AND
                      bu2.quantity < 0);
The above query should benefit from the following standard B-tree index:
CREATE INDEX idx ON batch_units (batch_id, date, quantity);
This should allow Postgres to do a single scan of the batch_units
table, and then quickly look up each record against the index.
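To check whether the planner actually uses the index, you can inspect the plan with EXPLAIN; the exact plan will depend on your table size and statistics:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT DISTINCT batch_id
    FROM batch_units bu1
    WHERE
        quantity > 0 AND
        NOT EXISTS (SELECT 1 FROM batch_units bu2
                    WHERE bu2.batch_id = bu1.batch_id AND
                          bu2.date < bu1.date AND
                          bu2.quantity < 0);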
Upvotes: 1