Nidheesh

Reputation: 4562

SQL ordered by Elapsed Time in AWR report

I was trying to analyze the AWR report generated for a particular process with a duration of one hour. I am trying to find out which query takes the most time while the process is running.

When I went through the report, I could see sections such as SQL ordered by Gets, SQL ordered by CPU Time, SQL ordered by Executions, SQL ordered by Parse Calls, SQL ordered by Sharable Memory, SQL ordered by Elapsed Time, etc.

I can see the SQL Text in the SQL ordered by Elapsed Time section.

My question: Is this the right way to identify the expensive queries? Please advise in this regard.

Elapsed Time (s)    SQL Text
19,477.05            select abc.....
7,644.04             select def...

Upvotes: 2

Views: 18097

Answers (4)

Dinesh vishe

Reputation: 3598

There are several parameters to check in order to find the issue.

A buffer get is less expensive than a physical read, because for a physical read the database has to work harder (and longer) to get the data: the time it would have taken had the block been available in the buffer cache, plus the time actually taken to read it from the physical block on disk.

If you suspect that excessive parsing is hurting your database’s performance:

  1. Check the “Time Model Statistics” section (hard parse elapsed time, parse time elapsed, etc.).

  2. See if there are any signs of library cache contention in the top 5 timed events.

  3. See if CPU is a bottleneck.

Establishing a new database connection is also expensive (and even more expensive when auditing or logon triggers are in place). “Logon storms” are known to create very serious performance problems. If you suspect that a high number of logons is degrading your performance, check “connection management call elapsed time” in “Time Model Statistics”.
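The time model figures in the AWR report come from the system time model. As a sketch, assuming you have SELECT privileges on the dynamic performance views, you can query the same statistics directly (values in V$SYS_TIME_MODEL are in microseconds):

```sql
-- Sketch: parse and connection overhead from the time model
-- (same statistics that feed the AWR "Time Model Statistics" section)
SELECT stat_name,
       ROUND(value / 1e6, 2) AS seconds
FROM   v$sys_time_model
WHERE  stat_name IN ('parse time elapsed',
                     'hard parse elapsed time',
                     'connection management call elapsed time')
ORDER  BY value DESC;
```

A large “connection management call elapsed time” relative to “DB time” would support the logon-storm hypothesis described above.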

A low Soft Parse % indicates bind variable and versioning issues. A soft parse ratio of 99.25 % means that about 0.75 % (100 − soft parse %) of parses are hard parses. A low hard parse rate is good for us.

If Latch Hit % is <99%, you may have a latch problem. Tune latches to reduce cache contention.

Library Hit % is great when it is near 100 %. If it were under 95 %, we would investigate the size of the shared pool. If this ratio is low, possible causes and remedies are:

• Increase the SHARED_POOL_SIZE init parameter.

• CURSOR_SHARING may need to be set to FORCE.

• SHARED_POOL_RESERVED_SIZE may be too small.

• Inefficient sharing of SQL, PLSQL or JAVA code.

• Insufficient use of bind variables
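The library cache hit ratio the report summarizes can also be computed from the dictionary. A minimal sketch, assuming access to V$LIBRARYCACHE:

```sql
-- Sketch: overall library cache hit ratio (pin hits vs. pins)
-- A value near 100 means cursors are being found and reused in the shared pool.
SELECT ROUND(SUM(pinhits) / SUM(pins) * 100, 2) AS library_hit_pct
FROM   v$librarycache;
```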

Upvotes: 0

Seth Projnabrata

Reputation: 143

In SQL ordered by Elapsed Time, you should always check for queries with a low number of executions and a high elapsed time; these are usually the problematic queries. Since elapsed time is the total time spent on a given query, a high value combined with few executions means that, for some reason, the query is not performing up to expectations.
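The same "low executions, high elapsed time" screening can be done against the cursor cache. A sketch, assuming V$SQL access and Oracle 12c or later (for FETCH FIRST; on older versions use ROWNUM):

```sql
-- Sketch: top SQL by elapsed time per execution (ELAPSED_TIME is in microseconds)
SELECT sql_id,
       executions,
       ROUND(elapsed_time / 1e6, 2)                        AS elapsed_s,
       ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 2) AS elapsed_per_exec_s
FROM   v$sql
WHERE  executions > 0
ORDER  BY elapsed_per_exec_s DESC
FETCH FIRST 10 ROWS ONLY;
```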

Upvotes: 0

eliatou

Reputation: 744

AWR shows overall database health, so I think it is not the right tool for tracing a single process.

You should use other tools like sql_trace (with tkprof) or dbms_profiler instead; they concentrate on your own process. If you use sql_trace, you need to connect to the server (or ask the DBA team) to analyse the trace file.
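As a minimal sketch of the sql_trace approach (trace file name and paths are hypothetical; the exact name depends on your instance):

```sql
-- Tag the trace file so it is easy to find, then trace only this session
ALTER SESSION SET tracefile_identifier = 'myproc';
ALTER SESSION SET sql_trace = TRUE;

-- ... run the process to be analysed here ...

ALTER SESSION SET sql_trace = FALSE;

-- Then, on the database server, format the trace with tkprof,
-- sorting statements by elapsed execute/fetch time:
--   tkprof orcl_ora_12345_myproc.trc myproc_report.txt sort=exeela,fchela
```

DBMS_MONITOR.SESSION_TRACE_ENABLE is an alternative when you need to trace a session other than your own.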

Upvotes: 0

Nidheesh

Reputation: 4562

SQL ordered by Elapsed Time includes SQL statements that took significant execution time during processing. We have to look at Executions, Elapsed time per Exec (s), etc. along with Elapsed time to analyze it.

For example, a query with low Executions and a high Elapsed time per Exec (s) could be a candidate for troubleshooting or optimization.
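The same metrics the report derives can be queried from the AWR repository itself. A sketch, assuming the Diagnostics Pack license (required to query DBA_HIST views) and Oracle 12c or later:

```sql
-- Sketch: top SQL across AWR snapshots by total elapsed time,
-- with elapsed time per execution for comparison (deltas are in microseconds)
SELECT sql_id,
       SUM(executions_delta)                     AS execs,
       ROUND(SUM(elapsed_time_delta) / 1e6, 2)   AS elapsed_s,
       ROUND(SUM(elapsed_time_delta) /
             NULLIF(SUM(executions_delta), 0) / 1e6, 2) AS elapsed_per_exec_s
FROM   dba_hist_sqlstat
GROUP  BY sql_id
ORDER  BY elapsed_s DESC
FETCH FIRST 10 ROWS ONLY;
```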

The best reference I found so far: http://www.dbas-oracle.com/2013/05/10-steps-to-analyze-awr-report-in-oracle.html

Upvotes: 1
