Naveenraja Subramaniam

Reputation: 365

How to reduce the execution time of the following query in Oracle?

I have written the following SQL query in Oracle 11g.

SELECT p.matchcode      pmatchcode,
       p1.matchcode     p1matchcode,
       p.digits         digit,
       p1.effectivedate peff,
       p1.expirydate    pexp,
       p.expirydate     p1exp,
       p.tariff_id      tariff_id
  FROM tt_matchcodes_view p1
  JOIN tt_matchcodes_view p
    on p.tariff_id = p1.tariff_id
   AND p.type_id = p1.type_id
   AND p1.Digits = p.Digits
   AND p.matchcode <> p1.matchcode
   AND p1.EffectiveDate < p.expirydate
   AND (p1.expirydate IS NULL OR p1.expirydate > p.expirydate)
   AND substr(p.matchcode, 0, length(p1.matchcode)) = p1.matchcode;

The tt_matchcodes_view table has 71392 records. I have two indexes on that table, one on matchcode and one on digits. The query takes more than 10 minutes to execute. Is there any way to reduce the execution time?

Sample Table DATA:

MATCHCODE  DIGITS  DEST_ID  MATCH  EFFECTIVEDATE  EXPIRYDATE  INHERITED  TARIFF_ID  TYPE_ID
1787       1787    73999    1      01/03/2012                  0         22         1
1787201    1787    73999    0      01/03/2012                 -1         22         1
1787202    1787    73999    0      01/03/2012                 -1         22         1
1787203    1787    73999    0      01/03/2012                 -1         22         1
1787204    1787    73999    0      01/03/2012                 -1         22         1
1787205    1787    73999    0      01/03/2012                 -1         22         1
1787206    1787    73999    0      01/03/2012                 -1         22         1
1787207    1787    73999    0      01/03/2012                 -1         22         1
1787208    1787    73999    0      01/03/2012                 -1         22         1
1787212    1787    73999    0      01/03/2012                 -1         22         1

Execution PLAN:

OPERATION         OPTIONS  OBJECT_NAME         OBJECT_INSTANCE  OPTIMIZER  ID  PARENT_ID  DEPTH  POSITION  COST  CARDINALITY  BYTES    CPU_COST  IO_COST
SELECT STATEMENT                                                ALL_ROWS   0              0      703       703   3            501      83322403  698
HASH JOIN                                                                  1   0          1      1         703   3            501      83322403  698
TABLE ACCESS      FULL     TT_MATCHCODES_VIEW  2                           2   1          2      1         95    65498        5174342  22711001  94
TABLE ACCESS      FULL     TT_MATCHCODES_VIEW  1                           3   1          2      2         95    65498        5763824  22711001  94

Thx in advance.

Upvotes: 0

Views: 4604

Answers (2)

APC

Reputation: 146209

Your table has over 70,000 rows. You are selecting all of its records twice and comparing rows on the basis of non-equality, so basically you're comparing every row against every other row in the table. (Not quite every row, because rows whose TARIFF_ID, TYPE_ID or DIGITS differ are excluded, but judging by your sample data that doesn't rule out many.) That's roughly 70,000 × 70,000 ≈ 4,900,000,000 comparisons. Ten minutes of execution time doesn't seem too bad under the circumstances.

The explain plan shows that Oracle has chosen the best plan it can. All you can do to improve it is to give Oracle a more useful index. A compound index which covers all the columns in the where clause might help. Something like this:

create index super_match_idx on tt_matchcodes_view
    (tariff_id, type_id, digits, matchcode, expirydate, effectivedate);

That might give you two INDEX FAST FULL SCAN operations, which should be speedier than the two TABLE ACCESS FULL operations in your current plan.

Incidentally, are you sorting the data when you populate the temporary table? Using an ORDER BY which aligns with the index would improve the clustering factor, so you might get faster retrievals because all the matching rows are more likely to be in contiguous blocks.
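For instance (a sketch only -- the statement that actually populates the temporary table isn't shown in the question, so the source table here is a placeholder):

insert into tt_matchcodes_view
select matchcode, digits, dest_id, match, effectivedate,
       expirydate, inherited, tariff_id, type_id
from   your_source_table                          -- placeholder for wherever the rows come from
order by tariff_id, type_id, digits, matchcode;   -- matches the leading columns of the index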

I don't normally advise sorting the rows in heap tables, but as you're already paying the overhead of inserting into a temporary table you might as well get as much juice as you can in return.

Oh, and substr(p.matchcode, 0, length(p1.matchcode)) is a dead giveaway. Smart keys are Teh Suck! Anyway, is there ever a case where that SUBSTR() call returns a value which doesn't match DIGITS? (Again, your sample data is ambiguous here.) If DIGITS reliably identifies the output of the SUBSTR(), I suggest you ditch that last line.
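In other words, assuming DIGITS really does always equal that prefix (worth verifying before you rely on it), the select list stays as it is and the join would shrink to something like:

  FROM tt_matchcodes_view p1
  JOIN tt_matchcodes_view p
    ON p.tariff_id = p1.tariff_id
   AND p.type_id = p1.type_id
   AND p1.digits = p.digits          -- now stands in for the SUBSTR() prefix test
   AND p.matchcode <> p1.matchcode
   AND p1.effectivedate < p.expirydate
   AND (p1.expirydate IS NULL OR p1.expirydate > p.expirydate);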

Upvotes: 2

Dave Costa

Reputation: 48111

Since this is a global temporary table, you must be populating it and then executing the query within the same session, or even transaction. That makes me wonder if Oracle has statistics on the table gathered when it was empty, that do not reflect its actual contents. You might try gathering statistics just before running your query. If that works, and the contents of the table are not expected to be very different from one run to the next, you might want to just pin those statistics so they won't be replaced.
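For example (standard DBMS_STATS calls, shown in SQL*Plus EXEC syntax, run in the same session after the table has been populated):

-- refresh the statistics so they describe the table's current contents
exec dbms_stats.gather_table_stats(user, 'TT_MATCHCODES_VIEW');

-- or, once a representative set of statistics is in place,
-- lock them so they are not overwritten by later gathers
exec dbms_stats.lock_table_stats(user, 'TT_MATCHCODES_VIEW');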

However, based on the information you've given so far, I'm not sure there is a better plan Oracle could come up with. An index on matchcode alone is not likely to be usable for this query, given the conditions on it in the predicate. The index on digits could be used, but since you are joining the table to itself, that would likely be less efficient than just doing the two full scans, since there will always be matches on digits (except when it's NULL, if ever).

We'd need to know more details about which conditions in the predicates filter out the most rows to make more suggestions. Assuming that the inequality on matchcode is the main filter, you might get some benefit out of a single index on (digits,matchcode) -- it could potentially join the index with itself and eliminate a lot of rows before going to the table at all.
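Something along these lines, if you want to try it (the index name is arbitrary):

create index tt_matchcodes_dig_mc_idx on tt_matchcodes_view (digits, matchcode);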

Upvotes: 1
