Jure

Reputation: 1176

SQL Server: strange SQL statement performance

I have a strange performance problem with one specific SQL statement (its contents are, I think, not important). When I run it from a .NET application (using EF for data access; the SQL gets parameterized) and capture the execution with SQL Profiler, I get the following results:

However, when running the same SQL statement through SSMS (after first executing SET STATISTICS IO ON and SET STATISTICS TIME ON) I get the following results:

Table 'mytable'. Scan count 1, logical reads 4, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times: CPU time = 16 ms, elapsed time = 138 ms.
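
For reference, the SSMS run looks roughly like this; the statement itself is only a placeholder for the real one, which I'm not including here:

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- the exact statement captured by Profiler, with the same parameter values
    SELECT * FROM mytable WHERE SomeColumn = 42;   -- placeholder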

Both statements are executed under the same user context, but obviously over different connections. As you can see, running the query through SSMS is far faster than running it through the application.

Where should I look for the differences, i.e. what causes the different reads and duration? Could the problem be parameter sniffing (although I don't see how it would be)?

Upvotes: 0

Views: 179

Answers (2)

JERRY

Reputation: 1173

This query will help you check whether there are multiple execution plans cached for a procedure: SELECT * FROM sys.dm_exec_procedure_stats WHERE object_id = ?
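
Since the statement in the question is parameterized ad hoc SQL sent by EF rather than a stored procedure, a similar check can be run against sys.dm_exec_query_stats. A minimal sketch, assuming you filter by a distinctive fragment of the query text (the '%mytable%' pattern is only a placeholder):

    -- List cached plans for the statement together with the SET options each
    -- plan was compiled under ('set_options' from sys.dm_exec_plan_attributes).
    SELECT st.text,
           qs.execution_count,
           qs.plan_handle,
           pa.value AS set_options
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    CROSS APPLY sys.dm_exec_plan_attributes(qs.plan_handle) AS pa
    WHERE pa.attribute = 'set_options'
      AND st.text LIKE '%mytable%';   -- placeholder: a fragment of your actual statement

If two rows come back for the same text with different set_options values, you are looking at two separately compiled plans.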

Check the SSMS SET options: Tools > Options > Query Execution > SQL Server > Advanced

You can see the difference between the two executions with the query below: SELECT * FROM sys.dm_exec_plan_attributes (plan_handle). The set_options attribute it returns is a bitmask built from the following values:

Option -----> Value

  • ANSI_PADDING -----> 1
  • Parallel Plan -----> 2
  • FORCEPLAN -----> 4
  • CONCAT_NULL_YIELDS_NULL -----> 8
  • ANSI_WARNINGS -----> 16
  • ANSI_NULLS -----> 32
  • QUOTED_IDENTIFIER -----> 64
  • ANSI_NULL_DFLT_ON -----> 128
  • ANSI_NULL_DFLT_OFF -----> 256
  • NoBrowseTable (the plan does not use a work table to implement a FOR BROWSE operation) -----> 512
  • TriggerOneRow (the plan contains a single-row optimization for AFTER trigger delta tables) -----> 1024
  • ResyncQuery (the query was submitted by internal system stored procedures) -----> 2048
  • ARITH_ABORT -----> 4096
  • NUMERIC_ROUNDABORT -----> 8192
  • DATEFIRST -----> 16384
  • DATEFORMAT -----> 32768
  • LanguageID -----> 65536
  • UPON (the database option PARAMETERIZATION was set to FORCED when the plan was compiled) -----> 131072
  • ROWCOUNT (applies to SQL Server 2012 through SQL Server 2016) -----> 262144

How do you decode this value back into options?

  • From the result table, take the set_options value. Here it is 4345.

  • Next, from the table above, find the largest value that is less than or equal to 4345. In this case it is 4096 (ARITH_ABORT).

  • Subtract it from the current value: 4345 - 4096 = 249.

  • Repeat these steps with the remainder until it reaches 0. The values subtracted along the way are the SET options that were used for the execution plan of that particular procedure or query.
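
The same decomposition can be done in T-SQL instead of by hand. A minimal sketch; the value 4345 is just the example above, in practice you would take it from sys.dm_exec_plan_attributes:

    DECLARE @set_options int = 4345;   -- example value from the steps above

    -- Bitwise AND against each documented flag keeps only the options that are
    -- set in the bitmask (for 4345: ARITH_ABORT, ANSI_NULL_DFLT_ON, QUOTED_IDENTIFIER,
    -- ANSI_NULLS, ANSI_WARNINGS, CONCAT_NULL_YIELDS_NULL, ANSI_PADDING).
    SELECT opts.name, opts.bit_value
    FROM (VALUES
            (1,      'ANSI_PADDING'),
            (2,      'Parallel Plan'),
            (4,      'FORCEPLAN'),
            (8,      'CONCAT_NULL_YIELDS_NULL'),
            (16,     'ANSI_WARNINGS'),
            (32,     'ANSI_NULLS'),
            (64,     'QUOTED_IDENTIFIER'),
            (128,    'ANSI_NULL_DFLT_ON'),
            (256,    'ANSI_NULL_DFLT_OFF'),
            (512,    'NoBrowseTable'),
            (1024,   'TriggerOneRow'),
            (2048,   'ResyncQuery'),
            (4096,   'ARITH_ABORT'),
            (8192,   'NUMERIC_ROUNDABORT'),
            (16384,  'DATEFIRST'),
            (32768,  'DATEFORMAT'),
            (65536,  'LanguageID'),
            (131072, 'UPON'),
            (262144, 'ROWCOUNT')
         ) AS opts(bit_value, name)
    WHERE @set_options & opts.bit_value = opts.bit_value;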

Upvotes: 0

Radu Gheorghiu

Reputation: 20489

The problem is that the execution plans generated by the two executions (application / SSMS) are different.

As you suspected, the problem is parameter sniffing.

Why is that, even though the query you're running is the same?

Well, because apart from looking for an execution plan in the plan cache for the executed query, SQL Server also looks at the session settings (ANSI_NULLS, ARITHABORT, etc.). The default session settings in these two scenarios are different.

I believe .NET / EF connections have ARITHABORT set to OFF, whereas in SSMS it's ON by default (have a look at this article - Slow in the application, fast in SSMS).
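
To see this from inside SSMS, you can match the application's session settings and send the statement exactly the way EF does, through sp_executesql. A minimal sketch; the statement text and the @p0 parameter are placeholders, since the actual query isn't shown in the question:

    -- ARITHABORT OFF is the usual setting difference between .NET connections and SSMS.
    SET ARITHABORT OFF;

    -- Execute the parameterized form, as captured by Profiler (placeholders below).
    EXEC sp_executesql
         N'SELECT * FROM mytable WHERE SomeColumn = @p0',
         N'@p0 int',
         @p0 = 42;

If the reads and duration now match what Profiler shows for the application, you are hitting the same (sniffed) plan the application uses.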

So, in the end, your plan cache will contain two different execution plans for the same query text, compiled for different session settings.

I would start by looking at re-writing this query. I recently had the same problem; I tried matching the session settings in both sessions (application / SSMS), and for a while the execution plan that was generated was fine.

But some time later, when the plan was pushed out of the plan cache, a new sub-optimal plan was generated and kept being used (parameter sniffing again), so this is quite a vicious cycle and I don't recommend it as a long-term solution.

As a long-term solution I would recommend re-writing your query / queries and perhaps adding some proper indexes, if you don't already have them in place.
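
For the indexing part, something along these lines is often enough to make the plan shape stable regardless of the sniffed value; the column names are hypothetical, since the actual query isn't shown:

    -- Hypothetical covering index: key on the filtered column, INCLUDE the columns
    -- the SELECT list needs, so the plan no longer hinges on a key lookup estimate.
    CREATE NONCLUSTERED INDEX IX_mytable_SomeColumn
        ON dbo.mytable (SomeColumn)
        INCLUDE (OtherColumn);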

Upvotes: 1
