Reputation: 895
I have a very large table with 100M+ rows. I am trying to find if there is a faster way to execute the following.
Query:
SELECT *
FROM "public".example
WHERE a = "foo" and b = "bar"
order by c /* could be any of fields c to z */
limit 100;
Here is the table and indexes I have setup now.
Table:
Indexes:
"example_multi_idx" btree (a, b)
"c_idx" btree (c)
Thoughts:
If the sort is always on c, then an index of "example_multi_idx_with_c" btree (a, b, c) performs wonderfully. However, if I throw in a variety of order by columns, then I would need to create n multi-key indexes, which seems wasteful.
Upvotes: 1
Views: 2539
Reputation: 1520
Having an index directly on the order by column will work in most cases, because Postgres can then walk that index in sorted order, check each row against the filters you provide, and stop once it has picked the first 100 matches.
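A rough way to check whether the planner actually takes that path (the 'foo' / 'bar' values are just the placeholders from the question) is to look at the plan:
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM "public".example
WHERE a = 'foo' and b = 'bar'
order by c
limit 100;
If the existing c_idx is used this way, the plan should show a Limit over an Index Scan using c_idx with a Filter on a and b, and no separate Sort node. Keep in mind this can be slow when few rows match the filters, since Postgres may have to walk a large part of c_idx before it collects 100 matching rows.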
Upvotes: 0
Reputation: 47444
How large are the groups once you've filtered by a and b? While including c in the index will certainly help improve performance, if your groups are not particularly large then the sort at the end of the operation shouldn't have a big cost.
Are you having performance issues with your current indexing?
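One quick way to check the group size (again using the placeholder values from the question):
SELECT count(*)
FROM "public".example
WHERE a = 'foo' and b = 'bar';
If that count is, say, in the low thousands, sorting those rows in memory before the limit 100 is cheap, and the existing example_multi_idx may already be good enough.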
Upvotes: 1
Reputation: 1269553
For this query:
SELECT *
FROM "public".example
WHERE a = "foo" and b = "bar"
order by c /* could be any of fields c to z */
limit 100;
The optimal index is example(a, b, c). Postgres should be able to use the index for sorting.
If you want to have multiple possible columns for the order by, you need a separate index for each one.
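For example, if c and d are two of the columns you might sort on, that would look roughly like this (the index names here are made up):
CREATE INDEX example_a_b_c_idx ON "public".example (a, b, c);
CREATE INDEX example_a_b_d_idx ON "public".example (a, b, d);
-- ...one such index per possible order by column
Each query can then read its 100 rows directly from the matching index, at the cost of extra storage and slower writes for every additional index.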
Upvotes: 2