Ashok Kumar Dabbadi

Reputation: 265

SQL Query Has Different Execution Elapsed Times in Oracle

I have a query that runs fine and gives the correct output. The problem is that the same query takes a different elapsed time to complete each run. The average elapsed time is 10 minutes, but sometimes it takes more than an hour. The query uses a SQL profile to get the best execution plan, and the DBA forces that profile every time.

INSERT INTO DATA_UPDATE_EVENT
  (structure_definition_id,
   eod_run_id,
   publish_group_name,
   jmsdbus_destination,
   dbaxbuild_location,
   location,
   original_data_type)
SELECT DISTINCT
       eod_structure_definition_id,
       p_eod_run_id,
       p_publish_group,
       pg.jmsdbus_destination,
       pg.dbaxbuild_location,
       pg.location,
       sd.data_type
FROM   PUBLISH_GROUP pg,
       STRUCTURE_EOD_MAPPING sem,
       WATCH_LIST_STRUCTURE wls,
       STRUCTURE_DEFINITION sd
WHERE  pg.publish_group_name = sem.publish_group_name
AND    sem.publish_group_name = p_publish_group
AND    wls.structure_definition_id = sem.structure_definition_id
AND    wls.watch_list_id IN (SELECT watch_list_id
                             FROM   TMP_WATCHLIST)
AND    sd.structure_definition_id = sem.structure_definition_id
AND    (sd.defcurve_name IS NULL
        OR sd.defcurve_name IN (SELECT curve_shortname
                                FROM   DEFCURVE_CURRENT
                                WHERE  curve_status = 'live'))
AND    (sd.generic_class_name IS NULL
        OR sd.generic_class_name <> 'CREDIT'
        OR (sd.generic_class_name = 'CREDIT'
            AND generic_name IN
                (SELECT generic_name
                 FROM   analytic_object ao,
                        analytic_object_instance aoi,
                        analytic_object_property aop,
                        defcurve_current dc
                 WHERE  ao.analytic_object_id = aoi.analytic_object_id
                 AND    aop.analytic_object_instance_id = aoi.analytic_object_instance_id
                 AND    aop.property_name = 'CreditObjectName'
                 AND    aop.prop_value1 = dc.curve_shortname
                 AND    aop.effective_to > systimestamp
                 AND    aop.effective_from < systimestamp
                 AND    dc.curve_status = 'live'
                 AND    aoi.analytic_object_instance_id IN
                        (SELECT analytic_object_instance_id
                         FROM   analytic_object_property
                         WHERE  property_name = 'CreditObjectType'))));

Here is the execution plan for the above query:

Execution Plan
------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name                        | Rows  | Bytes | Cost (%CPU)| Time
------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                             |                             |       |       |   232 (100)|
|   1 |  VIEW                                        | VW_DIS_1                    |     1 |  4052 |   232   (1)| 00:00:
|   2 |   SORT UNIQUE                                |                             |     1 |   186 |   232   (1)| 00:00:
|   3 |    FILTER                                    |                             |       |       |            |
|   4 |     NESTED LOOPS                             |                             |     1 |   186 |   231   (0)| 00:00:
|   5 |      NESTED LOOPS                            |                             |     1 |   158 |   229   (0)| 00:00:
|   6 |       NESTED LOOPS                           |                             |     1 |   132 |   228   (0)| 00:00:
|   7 |        NESTED LOOPS                          |                             |    15 |  1830 |     3   (0)| 00:00:
|   8 |         TABLE ACCESS BY INDEX ROWID          | PUBLISH_GROUP               |     1 |   109 |     1   (0)| 00:00:
|   9 |          INDEX UNIQUE SCAN                   | PK$PUBLISHGROUP             |     1 |       |     0   (0)|
|  10 |         TABLE ACCESS FULL                    | TMP_WATCHLIST               |    15 |   195 |     2   (0)| 00:00:
|  11 |        INDEX RANGE SCAN                      | PK$WATCHLISTSTRUCTURE       |  1322 | 13220 |    15   (0)| 00:00:
|  12 |       INDEX UNIQUE SCAN                      | PK$STRUCTURE_EOD_MAPPING    |     1 |    26 |     1   (0)| 00:00:
|  13 |      TABLE ACCESS BY INDEX ROWID             | STRUCTURE_DEFINITION        |     1 |    28 |     2   (0)| 00:00:
|  14 |       INDEX UNIQUE SCAN                      | PK$STRUCTURE_DEFINITION     |     1 |       |     1   (0)| 00:00:
|  15 |     TABLE ACCESS BY INDEX ROWID              | DEFCURVE_CURRENT            |     1 |    22 |     2   (0)| 00:00:
|  16 |      INDEX UNIQUE SCAN                       | PK$DEFCURVE_CURRENT         |     1 |       |     1   (0)| 00:00:
|  17 |      PX COORDINATOR                          |                             |       |       |            |
|  18 |       PX SEND QC (RANDOM)                    | :TQ10000                    |     1 |   118 |  5023   (1)| 00:01:
|  19 |        NESTED LOOPS                          |                             |     1 |   118 |  5023   (1)| 00:01:
|  20 |         NESTED LOOPS                         |                             |     1 |    96 |  5023   (1)| 00:01:
|  21 |          NESTED LOOPS                        |                             |   135 |  6480 |  4939   (1)| 00:01:
|  22 |           NESTED LOOPS                       |                             |  8343 |   236K|  1228   (1)| 00:00:
|  23 |            PX BLOCK ITERATOR                 |                             |       |       |            |
|  24 |             TABLE ACCESS FULL                | ANALYTIC_OBJECT             |    28 |   504 |   231   (1)| 00:00:
|  25 |            TABLE ACCESS BY GLOBAL INDEX ROWID| ANALYTIC_OBJECT_INSTANCE    |   300 |  3300 |   298   (0)| 00:00:
|  26 |             INDEX RANGE SCAN                 | UQ$ANALYTIC_OBJECT_INSTANCE |   300 |       |     3   (0)| 00:00:
|  27 |           INDEX UNIQUE SCAN                  | PK$ANALYTIC_OBJECT_PROPERTY |     1 |    19 |     2   (0)| 00:00:
|  28 |          TABLE ACCESS BY GLOBAL INDEX ROWID  | ANALYTIC_OBJECT_PROPERTY    |     1 |    48 |     3   (0)| 00:00:
|  29 |           INDEX UNIQUE SCAN                  | PK$ANALYTIC_OBJECT_PROPERTY |     1 |       |     2   (0)| 00:00:
|  30 |         TABLE ACCESS BY INDEX ROWID          | DEFCURVE_CURRENT            |     1 |    22 |     1   (0)| 00:00:
|  31 |          INDEX UNIQUE SCAN                   | PK$DEFCURVE_CURRENT         |     1 |       |     0   (0)|
------------------------------------------------------------------------------------------------------------------------ 

Note
-----
  - dynamic sampling used for this statement
  - SQL profile "SYS_SQLPROF_01505a8ce6144000" used for this statement

Can someone please suggest how to achieve a stable run time, and what information we should ask the DBA to provide?

Upvotes: 1

Views: 2109

Answers (2)

Jon Heller

Reputation: 36807

Here are some ideas:

  1. Bind variable? Is p_publish_group a bind variable? If so, is there a histogram on sem.publish_group_name? It looks like this query returns vastly different amounts of data depending on the input. Adaptive cursor sharing might help, but that would require a histogram (one way to check is sketched after this list).
  2. Bad profile? The cardinality estimates are horrible. If this statement ran for an hour then I would assume there are many millions of rows. But Oracle only expects 1 row, even with the SQL profile. Was the profile created on a very small data set and applied to a much larger one?
  3. Full hints? Even if your optimizer statistics are accurate, I bet the cardinality would still be far off. This may be one of those difficult queries that is so weird it requires a large number of hints, for example something like /*+ full(pg) full(sem) use_hash(pg sem) ... */. Indexes and nested loops work fine for small amounts of data, but if this query runs for an hour it likely needs full table scans and hash joins.
  4. SQL Monitoring. Run select dbms_sqltune.report_sql_monitor(sql_id => '<your sql id>', type => 'text') from dual; to find out what the query is actually doing and how much time is spent on each operation. The explain plans are only estimates; you need to know what is really happening. I bet when you run this you will see a few long-running steps where Estimated Rows = 1 and Actual Rows = 1000000. The sketch after this list shows how to find the sql_id first.
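
For points 1 and 4, here is a minimal sketch of the supporting queries, assuming you have access to V$SQL and DBA_TAB_COL_STATISTICS; the LIKE filter is only an illustrative way to locate the statement, so adjust it to your environment:

-- Find the sql_id and child cursors of the insert:
select sql_id, child_number, plan_hash_value, round(elapsed_time/1e6) as elapsed_secs
from   v$sql
where  sql_text like 'INSERT INTO DATA_UPDATE_EVENT%';

-- Check whether SEM.PUBLISH_GROUP_NAME has a histogram (needed for adaptive cursor sharing):
select owner, column_name, num_distinct, histogram
from   dba_tab_col_statistics
where  table_name  = 'STRUCTURE_EOD_MAPPING'
and    column_name = 'PUBLISH_GROUP_NAME';

-- Pull the SQL Monitor report for a slow run (requires the Tuning Pack licence):
select dbms_sqltune.report_sql_monitor(sql_id => '<your sql id>', type => 'TEXT')
from   dual;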

Upvotes: 1

APC

Reputation: 146209

Either you or your DBA needs to understand your data and your system. This is the most basic principle of tuning.

Queries will perform predictably, provided the environment is stable. If run times vary wildly then you need to find out what is different. Erratically poor performance may be due to lots of other users contending for system resources at particular times, or it might be due to variations in the volume or nature of the data. There are other possibilities too, but those are the ones to start with.

Your DBA should already be monitoring database usage; if they aren't, they need to start right now. As it doesn't seem likely your organization is paying for the Diagnostics Pack, you can use Statspack for this (the Oracle documentation covers the setup).
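
If you go the Statspack route, a minimal sketch of the workflow, assuming Statspack is already installed and you are running the scripts from SQL*Plus (they live in $ORACLE_HOME/rdbms/admin):

-- Take a snapshot before and after a slow run of the insert:
exec statspack.snap;

-- ... run the slow statement ...

exec statspack.snap;

-- Produce a report for the interval; the script prompts for the begin and end snapshot ids:
@?/rdbms/admin/spreport.sql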

As for data variation, there is one clue in the posted code:

AND    wls.watch_list_id IN (SELECT watch_list_id
                             FROM   TMP_WATCHLIST)

Assuming you adhere to a sensible naming convention (and years of SO have convinced me to be wary of such assumptions), TMP_WATCHLIST is a temporary table, which suggests it could hold different data, and different volumes of data, each time you run the query. If that is the case, it would be a good place to start. Depending on the precise problem, possible solutions include dynamic sampling, fixed stats or a cardinality hint; a sketch of each follows.
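
A minimal sketch of those three options, assuming TMP_WATCHLIST is a global temporary table owned by the schema running the insert; the sampling level and the row count of 100 are illustrative placeholders only:

-- Option 1: dynamic sampling hint on the temporary table inside the subquery:
AND wls.watch_list_id IN (SELECT /*+ dynamic_sampling(tw 4) */ watch_list_id
                          FROM   TMP_WATCHLIST tw)

-- Option 2: fix representative statistics once, so every parse sees the same numbers:
exec dbms_stats.set_table_stats(ownname => user, tabname => 'TMP_WATCHLIST', numrows => 100);

-- Option 3: cardinality hint telling the optimizer how many rows to expect:
AND wls.watch_list_id IN (SELECT /*+ cardinality(tw 100) */ watch_list_id
                          FROM   TMP_WATCHLIST tw)

If you fix the statistics, consider locking them with dbms_stats.lock_table_stats so a later stats-gathering job does not overwrite them.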

Upvotes: 1
