Reputation: 103
I have a table in a Postgres database. After it reached 2 million rows, the query became slower. This is my query:
SELECT
c.source,
c.destination,
c.product_id,
sum(c.weight),
count(c.weight),
c.owner_id
FROM stock c
GROUP BY c.source, c.destination, c.product_id, c.owner_id;
I already added an index:
CREATE INDEX stock_custom_idx ON public.stock USING btree (source, destination, product_id, owner_id)
The query is very slow, so I ran EXPLAIN ANALYZE, but the index is not used.
So, how can I optimize this query? It takes too long and does not return data.
Upvotes: 2
Views: 2950
Reputation: 2349
Try this index:
CREATE INDEX better_index ON public.stock USING btree
(source, destination, product_id, owner_id, weight);
If you do not include the weight column, that data still needs to be fetched from the table, so you get a full table scan.
With the new index, you should get an index-only scan. The query planner can also use the sort order of the index for the grouping (just as it could have done with your original index).
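As a quick sanity check (a sketch using the table and query from the question): index-only scans also depend on an up-to-date visibility map, which VACUUM maintains, so it is worth vacuuming after bulk loads and then confirming the plan with EXPLAIN:

```sql
-- Refresh the visibility map so Postgres can skip heap fetches
VACUUM ANALYZE public.stock;

-- Check the plan; look for "Index Only Scan using better_index"
EXPLAIN (ANALYZE, BUFFERS)
SELECT c.source, c.destination, c.product_id,
       sum(c.weight), count(c.weight), c.owner_id
FROM stock c
GROUP BY c.source, c.destination, c.product_id, c.owner_id;
```

If the plan still shows a sequential scan with many "Heap Fetches", the visibility map is likely stale.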
In newer versions of PostgreSQL (11+), there is also the INCLUDE clause, which lets you "add" columns to an index without affecting its sort order (the data is stored in the index, but that part is not sorted). I would guess this makes the index yet a bit more efficient for your query.
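For illustration, a covering index using INCLUDE could look like this (the index name stock_covering_idx is just a placeholder):

```sql
-- PostgreSQL 11+: weight is stored in the index leaf pages
-- but does not take part in the sort order of the key columns
CREATE INDEX stock_covering_idx ON public.stock USING btree
    (source, destination, product_id, owner_id) INCLUDE (weight);
```

Keeping weight out of the key columns makes the key entries smaller to compare while still allowing an index-only scan.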
Upvotes: 2