JustAFriend

Reputation: 41

Extremely slow query on 1st run, even with indexes

I have an extremely slow query (on the order of 1-3 minutes) despite indexes being used. Similar queries will be run 4-6 times by the user, so speed is critical.

QUERY:

            SELECT SUM(bh.count) AS count,b.time AS batchtime
            FROM
              batchtimes AS b
            INNER JOIN batchtimes_headlines AS bh ON b.hashed_id = bh.batchtime_hashed_id
            INNER JOIN headlines_ngrams AS hn ON bh.headline_hashed_id = hn.headline_hashed_id
            INNER JOIN ngrams AS n ON hn.ngram_hashed_id = n.hashed_id
            INNER JOIN homepages_headlines AS hh ON bh.headline_hashed_id = hh.headline_hashed_id
            INNER JOIN homepages AS hp ON hh.homepage_hashed_id = hp.hashed_id
            WHERE
              b.time IN (SELECT * FROM generate_series('2013-10-10 20:00:00.000000'::timestamp,'2014-02-16 20:00:00.000000'::timestamp,'1 hours'))
              AND ( n.gram = 'a' )
              AND hp.url = 'www.abcdefg.com'
            GROUP BY
              b.time
            ORDER BY
              b.time ASC;

EXPLAIN ANALYZE after very first run:

GroupAggregate  (cost=6863.26..6863.79 rows=30 width=12) (actual time=90905.858..90908.889 rows=3039 loops=1)
 ->  Sort  (cost=6863.26..6863.34 rows=30 width=12) (actual time=90905.853..90906.971 rows=19780 loops=1)
     Sort Key: b."time"
     Sort Method: quicksort  Memory: 1696kB
     ->  Hash Join  (cost=90.16..6862.52 rows=30 width=12) (actual time=378.784..90890.636 rows=19780 loops=1)
           Hash Cond: (b."time" = generate_series.generate_series)
           ->  Nested Loop  (cost=73.16..6845.27 rows=60 width=12) (actual time=375.644..90859.059 rows=22910 loops=1)
                 ->  Nested Loop  (cost=72.88..6740.51 rows=60 width=37) (actual time=375.624..90618.828 rows=22910 loops=1)
                       ->  Nested Loop  (cost=42.37..4391.06 rows=1 width=66) (actual time=368.993..54607.402 rows=1213 loops=1)
                             ->  Nested Loop  (cost=42.23..4390.18 rows=5 width=99) (actual time=223.681..53051.774 rows=294787 loops=1)
                                   ->  Nested Loop  (cost=41.68..4379.19 rows=5 width=33) (actual time=223.643..49403.746 rows=294787 loops=1)
                                         ->  Index Scan using by_gram_ngrams on ngrams n  (cost=0.56..8.58 rows=1 width=33) (actual time=17.001..17.002 rows=1 loops=1)
                                               Index Cond: ((gram)::text = 'a'::text)
                                         ->  Bitmap Heap Scan on headlines_ngrams hn  (cost=41.12..4359.59 rows=1103 width=66) (actual time=206.634..49273.363 rows=294787 loops=1)
                                               Recheck Cond: ((ngram_hashed_id)::text = (n.hashed_id)::text)
                                               ->  Bitmap Index Scan on by_ngramhashedid_headlinesngrams  (cost=0.00..40.84 rows=1103 width=0) (actual time=143.430..143.430 rows=294787 loops=1)
                                                     Index Cond: ((ngram_hashed_id)::text = (n.hashed_id)::text)
                                   ->  Index Scan using by_headlinehashedid_homepagesheadlines on homepages_headlines hh  (cost=0.56..2.19 rows=1 width=66) (actual time=0.011..0.011 rows=1 loops=294787)
                                         Index Cond: ((headline_hashed_id)::text = (hn.headline_hashed_id)::text)
                             ->  Index Scan using by_hashedid_homepages on homepages hp  (cost=0.14..0.17 rows=1 width=33) (actual time=0.005..0.005 rows=0 loops=294787)
                                   Index Cond: ((hashed_id)::text = (hh.homepage_hashed_id)::text)
                                   Filter: ((url)::text = 'www.abcdefg.com'::text)
                                   Rows Removed by Filter: 1
                       ->  Bitmap Heap Scan on batchtimes_headlines bh  (cost=30.51..2333.86 rows=1560 width=70) (actual time=7.977..29.674 rows=19 loops=1213)
                             Recheck Cond: ((headline_hashed_id)::text = (hn.headline_hashed_id)::text)
                             ->  Bitmap Index Scan on by_headlinehashedid_batchtimesheadlines  (cost=0.00..30.12 rows=1560 width=0) (actual time=6.595..6.595 rows=19 loops=1213)
                                   Index Cond: ((headline_hashed_id)::text = (hn.headline_hashed_id)::text)
                 ->  Index Scan using by_hashedid_batchtimes on batchtimes b  (cost=0.28..1.74 rows=1 width=41) (actual time=0.009..0.009 rows=1 loops=22910)
                       Index Cond: ((hashed_id)::text = (bh.batchtime_hashed_id)::text)
           ->  Hash  (cost=14.50..14.50 rows=200 width=8) (actual time=3.130..3.130 rows=3097 loops=1)
                 Buckets: 1024  Batches: 1  Memory Usage: 121kB
                 ->  HashAggregate  (cost=12.50..14.50 rows=200 width=8) (actual time=1.819..2.342 rows=3097 loops=1)
                       ->  Function Scan on generate_series  (cost=0.00..10.00 rows=1000 width=8) (actual time=0.441..0.714 rows=3097 loops=1)

Total runtime: 90911.001 ms

EXPLAIN ANALYZE after 2nd run:

GroupAggregate  (cost=6863.26..6863.79 rows=30 width=12) (actual time=3122.861..3125.796 rows=3039 loops=1)
 ->  Sort  (cost=6863.26..6863.34 rows=30 width=12) (actual time=3122.857..3123.882 rows=19780 loops=1)
     Sort Key: b."time"
     Sort Method: quicksort  Memory: 1696kB
     ->  Hash Join  (cost=90.16..6862.52 rows=30 width=12) (actual time=145.396..3116.467 rows=19780 loops=1)
           Hash Cond: (b."time" = generate_series.generate_series)
           ->  Nested Loop  (cost=73.16..6845.27 rows=60 width=12) (actual time=142.406..3102.864 rows=22910 loops=1)
                 ->  Nested Loop  (cost=72.88..6740.51 rows=60 width=37) (actual time=142.395..3011.768 rows=22910 loops=1)
                       ->  Nested Loop  (cost=42.37..4391.06 rows=1 width=66) (actual time=142.229..2969.144 rows=1213 loops=1)
                             ->  Nested Loop  (cost=42.23..4390.18 rows=5 width=99) (actual time=135.799..2142.666 rows=294787 loops=1)
                                   ->  Nested Loop  (cost=41.68..4379.19 rows=5 width=33) (actual time=135.768..437.824 rows=294787 loops=1)
                                         ->  Index Scan using by_gram_ngrams on ngrams n  (cost=0.56..8.58 rows=1 width=33) (actual time=0.030..0.031 rows=1 loops=1)
                                               Index Cond: ((gram)::text = 'a'::text)
                                         ->  Bitmap Heap Scan on headlines_ngrams hn  (cost=41.12..4359.59 rows=1103 width=66) (actual time=135.732..405.943 rows=294787 loops=1)
                                               Recheck Cond: ((ngram_hashed_id)::text = (n.hashed_id)::text)
                                               ->  Bitmap Index Scan on by_ngramhashedid_headlinesngrams  (cost=0.00..40.84 rows=1103 width=0) (actual time=72.570..72.570 rows=294787 loops=1)
                                                     Index Cond: ((ngram_hashed_id)::text = (n.hashed_id)::text)
                                   ->  Index Scan using by_headlinehashedid_homepagesheadlines on homepages_headlines hh  (cost=0.56..2.19 rows=1 width=66) (actual time=0.005..0.005 rows=1 loops=294787)
                                         Index Cond: ((headline_hashed_id)::text = (hn.headline_hashed_id)::text)
                             ->  Index Scan using by_hashedid_homepages on homepages hp  (cost=0.14..0.17 rows=1 width=33) (actual time=0.003..0.003 rows=0 loops=294787)
                                   Index Cond: ((hashed_id)::text = (hh.homepage_hashed_id)::text)
                                   Filter: ((url)::text = 'www.abcdefg.com'::text)
                                   Rows Removed by Filter: 1
                       ->  Bitmap Heap Scan on batchtimes_headlines bh  (cost=30.51..2333.86 rows=1560 width=70) (actual time=0.015..0.031 rows=19 loops=1213)
                             Recheck Cond: ((headline_hashed_id)::text = (hn.headline_hashed_id)::text)
                             ->  Bitmap Index Scan on by_headlinehashedid_batchtimesheadlines  (cost=0.00..30.12 rows=1560 width=0) (actual time=0.013..0.013 rows=19 loops=1213)
                                   Index Cond: ((headline_hashed_id)::text = (hn.headline_hashed_id)::text)
                 ->  Index Scan using by_hashedid_batchtimes on batchtimes b  (cost=0.28..1.74 rows=1 width=41) (actual time=0.003..0.004 rows=1 loops=22910)
                       Index Cond: ((hashed_id)::text = (bh.batchtime_hashed_id)::text)
           ->  Hash  (cost=14.50..14.50 rows=200 width=8) (actual time=2.982..2.982 rows=3097 loops=1)
                 Buckets: 1024  Batches: 1  Memory Usage: 121kB
                 ->  HashAggregate  (cost=12.50..14.50 rows=200 width=8) (actual time=1.771..2.311 rows=3097 loops=1)
                       ->  Function Scan on generate_series  (cost=0.00..10.00 rows=1000 width=8) (actual time=0.439..0.701 rows=3097 loops=1)

Total runtime: 3125.985 ms

I have a 32GB server. Here are the modifications to postgresql.conf:

The DB has recently been vacuumed, re-indexed, and analyzed.

Any suggestions for how to tune this query?

Upvotes: 4

Views: 499

Answers (1)

Nikhil Mahajan

Reputation: 54

This may or may not answer your question. I cannot comment above, since I don't have 50 reputation as per Stack Overflow. :/

My first question is: why INNER JOIN? This will return unwanted columns in your join result. For example, in your query, when you inner join

INNER JOIN headlines_ngrams AS hn ON bh.headline_hashed_id = hn.headline_hashed_id

the result will have two columns with the same information, which is redundant. So, for example, if you have 100,000,000 rows, you will have 100,000,000 entries in each of the bh.headline_hashed_id and hn.headline_hashed_id columns. In your query above you are joining 5 tables. Plus, you are interested in only

SELECT SUM(bh.count) AS count,b.time AS batchtime

so I believe you should use a NATURAL JOIN.

[Inner join](http://en.wikipedia.org/wiki/Inner_join#Inner_join)
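As a hedged illustration (a sketch only, not tested against the poster's schema), the duplicated join columns can be collapsed with USING, since both sides of those joins share the same column name:

```sql
-- Sketch: JOIN ... USING (col) behaves like INNER JOIN ... ON a.col = b.col,
-- but the output exposes a single shared column instead of two copies.
-- (The time-range predicate from the original query is omitted for brevity.)
SELECT SUM(bh.count) AS count, b.time AS batchtime
FROM batchtimes AS b
JOIN batchtimes_headlines AS bh ON b.hashed_id = bh.batchtime_hashed_id
JOIN headlines_ngrams     AS hn USING (headline_hashed_id)
JOIN homepages_headlines  AS hh USING (headline_hashed_id)
JOIN ngrams               AS n  ON hn.ngram_hashed_id = n.hashed_id
JOIN homepages            AS hp ON hh.homepage_hashed_id = hp.hashed_id
WHERE n.gram = 'a'
  AND hp.url = 'www.abcdefg.com'
GROUP BY b.time
ORDER BY b.time ASC;
```

Note that this mainly changes the shape of the output (one copy of each shared join column); the planner typically produces the same plan for the ON and USING forms, so it may not affect runtime by itself.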

The reason I can think of for the improved performance on the second attempt is caching. People have mentioned above that using a temporary table for generate_series could be a good option. Plus, if you are thinking of using WITH in your query, you should read this article. link
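The temporary-table idea could be sketched like this (assumptions: the same date range as the original query, and a table name `series_hours` invented here for illustration). Materializing the series once gives the planner a real table with statistics and an index, instead of the default rows=1000 guess for a set-returning function:

```sql
-- Materialize the hourly series once per session so repeated queries
-- can join against an analyzed, indexable table.
CREATE TEMPORARY TABLE series_hours AS
SELECT generate_series('2013-10-10 20:00:00'::timestamp,
                       '2014-02-16 20:00:00'::timestamp,
                       '1 hour') AS t;
CREATE INDEX ON series_hours (t);
ANALYZE series_hours;

-- Then replace the IN (SELECT * FROM generate_series(...)) predicate with:
--   JOIN series_hours AS s ON s.t = b.time
```

Since the series here covers every hour in the range, a simple `b.time BETWEEN ... AND ...` predicate might be an even cheaper equivalent, if b.time values always fall on hour boundaries.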

Upvotes: 1
