Robert Hargreaves

Reputation: 162

PostgreSQL: Underperforming query on large table with composite key

We have a table of 180m rows, 20 GB in size. Table DDL is:

create table app.table
(
    a_id    integer   not null,
    b_id    integer   not null,
    c_id    integer   not null,
    d_id    integer   not null,
    e_id    integer   not null,
    f_id    integer   not null,
    a_date  timestamp not null,
    date_added          timestamp,
    last_date_modified  timestamp default now()
);

Value distributions:

a_id has a range of 0-160,000,000
b_id has one value
c_id has a range of 0-4
d_id has one value (currently)
e_id has one value (currently)

The primary key is a composite key:

alter table app.table add constraint table_pk primary key (a_id, b_id, c_id, d_id, e_id);

We're running an r6g.xlarge cluster on Aurora PostgreSQL v12.8. It's one instance with no other traffic hitting it. We've run ANALYZE and VACUUM ANALYZE against the table:

INFO:  "table": scanned 30000 of 1711284 pages, containing 3210000 live rows and 0 dead rows; 30000 rows in sample, 183107388 estimated total rows

Problem

This query takes 9 seconds to run when shared_buffers is cold (or as cold as we can get it):

select a_id, b_id, c_id, d_id, a_date
from app.table ts
where a_id in ( <5000 values> )
and b_id = 34
and c_id in (2,3)
and d_id = 0

EXPLAIN output:

Index Scan using table_pk on table ts  (cost=0.57..419134.91 rows=237802 width=24) (actual time=8.335..9803.424 rows=5726 loops=1)
"  Index Cond: ((a_id = ANY ('{66986803,90478329,...,121697593}'::integer[])) AND (b_id = 34))"
"  Filter: (c_id = ANY ('{2,3}'::integer[])))"
  Rows Removed by Filter: 3
  Buffers: shared hit=12610 read=10593
  I/O Timings: read=9706.055
Planning:
  Buffers: shared hit=112 read=29
  I/O Timings: read=29.227
Planning Time: 33.437 ms
Execution Time: 9806.271 ms

We think this is unreasonably slow. When the query is run again, and thus served from cache, it takes 25 ms. We'd rather not prewarm if possible.
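
(For reference, the prewarming we're trying to avoid would look roughly like this, assuming the pg_prewarm extension, which Aurora PostgreSQL supports:)

CREATE EXTENSION IF NOT EXISTS pg_prewarm;
-- load the PK index into shared_buffers ahead of the query
SELECT pg_prewarm('app.table_pk');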

In any case, we'd rather have better performance for this sort of query, around the 1-2 second mark if possible. Any ideas on how we could improve the performance?


EDIT - Effect of adding a covering index:

We tried adding a covering index to INCLUDE the a_date column:

create unique index covering_idx on app.table (a_id, b_id, c_id, d_id, e_id) include (a_date)

EXPLAIN results after re-running the query (with cold shared_buffers cache):

Index Only Scan using covering_idx on table ts (cost=0.57..28438.58 rows=169286 width=24) (actual time=8.020..7028.442 rows=5658 loops=1)
  Index Cond: ((a_id = ANY ('{134952505,150112033,...,42959574}'::integer[])) AND (b_id = 34))
  Filter: ((d_id = ANY ('{0,0}'::integer[])) AND (c_id = ANY ('{2,3}'::integer[])))
  Rows Removed by Filter: 2
  Heap Fetches: 0
  Buffers: shared hit=12353 read=7733
  I/O Timings: read=6955.935
Planning:
  Buffers: shared hit=80 read=8
  I/O Timings: read=8.458
Planning Time: 11.930 ms
Execution Time: 7031.054 ms

Effect when using Bitmap Heap Scan vs. Index Scan:

We've discovered that we get a speed up when the query is executed using a Bitmap Heap Scan, rather than an Index Scan. We found this by forcing the plan using pg_hint_plan:

When adding /*+ BitmapScan(table) */:

Bitmap Heap Scan on table ts (cost=22912.96..60160.79 rows=9842 width=24) (actual time=3972.237..4063.417 rows=5657 loops=1)
  Recheck Cond: ((a_id = ANY ('{24933126,19612702,27100661,73628268,...,150482461}'::integer[])) AND (b_id = 34))
  Filter: ((d_id = ANY ('{0,0}'::integer[])) AND (c_id = ANY ('{2,3}'::integer[])))
  Rows Removed by Filter: 4
  Heap Blocks: exact=5644
  Buffers: shared hit=14526 read=11136
  I/O Timings: read=22507.527
  ->  Bitmap Index Scan on table_pk (cost=0.00..22898.00 rows=9842 width=0) (actual time=3969.920..3969.920 rows=5661 loops=1)
       Index Cond: ((a_id = ANY ('{24933126,19612702,27100661,...,150482461}'::integer[])) AND (b_id = 34))
       Buffers: shared hit=14505 read=5513
       I/O Timings: read=3923.878
Planning:
  Buffers: shared hit=6718
Planning Time: 21.493 ms
Execution Time: 4066.582 ms

Currently, we are thinking of forcing this plan in production using pg_hint_plan, but we'd rather understand why the planner is opting for the slower plan! We have run VACUUM ANALYZE with default_statistics_target = 1000.
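
(For reference, this is roughly how we raised the target; the per-column variant below is an alternative we could also try, with the column choice purely illustrative:)

SET default_statistics_target = 1000;
VACUUM ANALYZE app.table;

-- alternative: raise the target for a single column only, then re-analyze
ALTER TABLE app.table ALTER COLUMN a_id SET STATISTICS 1000;
ANALYZE app.table;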

Upvotes: 10

Views: 1032

Answers (3)

utrucceh

Reputation: 1116

For your query:

select a_id, b_id, c_id, d_id, a_date
from app.table ts
where a_id in ( <5000 values> ) -- many values
and b_id = 34 -- 1 value
and c_id in (2,3) -- 2 values
and d_id = 0 -- 1 value
-- e_id is not referenced, so it does not need to be in the index

You need an index whose columns are ordered from fewest distinct values to most:

CREATE INDEX INDX_TABLE_1 ON app.table (b_id, d_id, c_id, a_id) INCLUDE (a_date)

That should be enough.

If you want, you can also compare performance with this index; sometimes PostgreSQL handles the IN keyword differently :)

CREATE INDEX INDX_TABLE_2 ON app.table (b_id, d_id, c_id) INCLUDE (a_id, a_date)

Please share your test results.

Also, if columns like b_id, c_id, d_id only ever hold a small set of values and appear in every query (like x_id = 1), you can use partitioned tables for better performance; a sketch follows below.
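
(A minimal sketch of list partitioning on b_id, assuming it stays low-cardinality; the new table name and trimmed column list are illustrative only:)

-- Hypothetical: PostgreSQL requires the partition key (b_id) to be part of the PK
CREATE TABLE app.table_part
(
    a_id   integer   NOT NULL,
    b_id   integer   NOT NULL,
    c_id   integer   NOT NULL,
    d_id   integer   NOT NULL,
    e_id   integer   NOT NULL,
    a_date timestamp NOT NULL,
    PRIMARY KEY (a_id, b_id, c_id, d_id, e_id)
) PARTITION BY LIST (b_id);

-- one partition per observed b_id value
CREATE TABLE app.table_part_b34 PARTITION OF app.table_part FOR VALUES IN (34);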

Upvotes: -1

Erwin Brandstetter

Reputation: 656804

You are trying to optimize query performance on cold cache.

It's one instance with no other traffic hitting it. We've run ANALYZE and VACUUM ANALYZE against the table

(Aside: running ANALYZE in addition to VACUUM ANALYZE adds nothing, so that's redundant.)

To optimize, minimize the number of data pages that have to be read. So ...

  1. ...decrease the storage size per row if possible. (With index-only scans, that's mostly just important for the involved index.)

  2. ... increase data locality: more qualifying tuples in the same data page means fewer pages to read.

Just reorder PK columns

You should get some improvement from simply re-ordering columns in your PK. You now have:

primary key (a_id, b_id, c_id, d_id, e_id)

With leading a_id. Index tuples for distinct a_id are spread out as much as possible. Exactly what your query does not need. You disclosed:

b_id has one value [...]
d_id has one value (currently)
e_id has one value (currently)
c_id has a range of 0-4
a_id has a range of 0-160,000,000

Reorder columns like this to maximize locality for your query:

ALTER TABLE app.table DROP CONSTRAINT table_pk;
ALTER TABLE app.table ADD CONSTRAINT table_pk PRIMARY KEY (b_id, d_id, e_id, c_id, a_id) INCLUDE (a_date);

(Plus, add INCLUDE (a_date) to allow index-only scans, as has been suggested already.)

Since b_id and (currently) d_id / e_id are constants, those are just noise / ballast. The important part is to move c_id before a_id: this way, we never touch branches of the index with c_id IN (0,1,4), and more of our tuples end up on fewer index pages. It's a mild effect, since we seem to use about half of the spectrum anyway.

More radical

Since b_id is a constant, it shouldn't water down the PK to begin with. The same is true for d_id and e_id if they actually remain constants.

And we don't need e_id for our query at all.

This adapted query:

SELECT a_id, 34 AS b_id, c_id, 0 AS d_id, a_date
FROM   app.table ts
WHERE  c_id IN (2,3)
AND    a_id IN ( < 5000 VALUES > )

.. in combination with this index would be much better:

CREATE INDEX foo ON app.table (c_id, a_id) INCLUDE (a_date);

Probably better yet:

SELECT a_id, 34 AS b_id, 2 AS c_id, 0 AS d_id, a_date
FROM   app.table ts
WHERE  c_id = 2
AND    a_id IN ( < 5000 VALUES > )
UNION ALL
SELECT a_id, 34 AS b_id, 3 AS c_id, 0 AS d_id, a_date
FROM   app.table ts
WHERE  c_id = 3
AND    a_id IN ( < 5000 VALUES > )

This allows index-only scans with only index conditions (Index Cond: in the query plan) and no filter (Filter:), for minimum cost.
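
(To verify the UNION ALL approach, an EXPLAIN along these lines should show only Index Cond: entries and no Filter: line; the three-value IN list is abbreviated for illustration:)

EXPLAIN (ANALYZE, BUFFERS)
SELECT a_id, 34 AS b_id, 2 AS c_id, 0 AS d_id, a_date
FROM   app.table ts
WHERE  c_id = 2
AND    a_id IN (1, 2, 3);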

Or even partial indexes for the last query:

CREATE INDEX foo_c2 ON app.table (a_id) INCLUDE (a_date) WHERE c_id = 2;
CREATE INDEX foo_c3 ON app.table (a_id) INCLUDE (a_date) WHERE c_id = 3;

This eliminates irrelevant rows from the tailored index(es) and makes index access slightly cheaper still.

Also, consider upgrading to Postgres 13 or Postgres 14, which bring various improvements for big tables.

Consider the bottom part of the manual page "Index-Only Scans and Covering Indexes" for this!

Upvotes: 3

jjanes

Reputation: 44192

This question might be pretty specific to Aurora, on which I don't have much experience.

Your index-only scan results are a bit surprising. I would not think it should take 7733 buffer reads to obtain 5658 rows (plus 2 filtered out, with 0 heap fetches); I would not expect it to need more than ~5700 reads. But I understand that Aurora's storage layer is quite different from community PostgreSQL's, so maybe that has something to do with it. Anyway, that is only a 25% reduction, not the 10-fold one you are looking for.

EDIT: I realized those extra reads are of internal index pages. I had rejected this idea at first, because 2075 internal pages to 5658 leaf pages is a ridiculous ratio. But then I realized that the leaf pages read by that one query are a tiny fraction of all the leaf pages that exist, while the internal pages read are probably the bulk of all the internal pages that exist. This points to a flaw in your testing method. To avoid caching the data unfairly, it would be enough to randomly pick a different 5000 a_id values each time; see the sketch below. Restarting the whole database (or whatever method you used to clear the cache) is way overkill. If it is not overkill, because you really are restarting your production database between every query, well, stop doing that.
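
(A sketch of that idea: TABLESAMPLE BERNOULLI draws a row-level random sample, and 0.003 percent of ~183M rows yields roughly the 5000 ids needed per run; the exact percentage is an assumption to tune:)

-- Illustrative: draw a fresh random set of existing a_id values for each cold-cache run
SELECT a_id
FROM   app.table TABLESAMPLE BERNOULLI (0.003)
LIMIT  5000;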

The read times of about 1 ms per read seem rather slow for something using a good SSD layer (even my own crappy one does that well), but I can't find any good data about what you should expect from Aurora's storage layer.

I am also curious about the row estimates being off by 30- to 50-fold. Why is that? It just shouldn't be that hard to come up with a more accurate estimate here. Then again, I wouldn't think a different plan would be any faster, so the estimate really shouldn't matter; but you never know where a mystery will lead you. What if you keep just the a_id IN-list and drop the rest of the column conditions?

EDIT: I think I realized the answer to this. The sampling method PostgreSQL uses to compute pg_stats.n_distinct is subtly biased in a way that can greatly underestimate n_distinct on a very large table that is clustered on the sampled column (a_id here), and n_distinct is very important to the selectivity estimate. Fortunately you can manually override this estimate using alter table app."table" alter a_id set (n_distinct = 9999999);. But again, that is not going to do much for you here, because there is no better plan to be had. It might be important for other queries, though.
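
(To see what the planner currently believes, something like this should show the estimate in question; note that the override above only takes effect after the next ANALYZE:)

-- inspect the stored cardinality estimate for a_id
SELECT attname, n_distinct
FROM   pg_stats
WHERE  schemaname = 'app' AND tablename = 'table' AND attname = 'a_id';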

But I think your best course is to take a step back. Why are you running this query? What is the "business case" for it? Where is the list of 5000 ids coming from? Is there some pattern to them?

Upvotes: 2
