Reputation: 549
We have a stored procedure that searches across joined tables, and we have a Production and a Dev SQL Server. The stored procedure runs fine on the Production server, but on the Dev server it takes more than 2 minutes to execute. I'm considering ROW_NUMBER() to be the root cause of the performance issue we are having on the Dev server.
CREATE PROC sp_fullEmp
    @take INT,
    @skip INT
AS
WITH main AS (
    -- Join each extract to its processing record and the employee recorded as its manager
    SELECT
        dt.FirstName + ' ' + dt.LastName   AS [FullName],
        dt.[Location],
        emp.FirstName + ' ' + emp.LastName AS [Manager],
        dt.[ApplicationDate]
    FROM ExtractedData dt
    LEFT JOIN ProcessTable tb
        LEFT JOIN Employee emp ON emp.ID = tb.EmployeeID
        ON tb.ExtID = dt.ID
),
searched AS (
    -- Number every row by FullName so the caller can page through the results
    SELECT *, ROW_NUMBER() OVER (ORDER BY FullName ASC) AS RN
    FROM main
)
SELECT * FROM searched WHERE RN BETWEEN @skip AND @take
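To fetch the first 50 rows, for example, it is called like this (the values here are just an example):

EXEC sp_fullEmp @take = 50, @skip = 1;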
A similar query runs against other tables without any performance issue on both the Dev and Production servers, even with a larger set of data.
What can be the root cause of this issue?
Dev Execution Plan
Prod Execution Plan
Upvotes: 2
Views: 541
Reputation: 1181
Your problem might be parameter sniffing. SQL Server doesn't know in advance what @Skip and @Take are going to be, and the cached plan may not suit the values actually being passed, which could be throwing the plan.
It's worth comparing the data in Prod and Dev. Dev data can often be a lot more skewed, especially if it is being inserted manually. Look at the averages, minimums, and maximums on both sides.
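As a rough sketch of that comparison (table and column names are taken from your question, so adjust them to whatever you actually want to profile), run the same query on both servers and compare the numbers:

SELECT
    COUNT(*)                    AS TotalRows,
    COUNT(DISTINCT dt.LastName) AS DistinctLastNames,
    MIN(dt.ApplicationDate)     AS MinApplicationDate,
    MAX(dt.ApplicationDate)     AS MaxApplicationDate
FROM ExtractedData dt;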
Try sticking WITH RECOMPILE on the stored proc and see if this helps. Alternatively, DBCC FREEPROCCACHE will empty the plan cache so a new plan gets created.
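A minimal sketch of both suggestions, reusing the proc from your question (WITH RECOMPILE goes between the parameter list and AS; a statement-level OPTION (RECOMPILE) on just the final SELECT is a lighter-weight alternative):

ALTER PROC sp_fullEmp
    @take INT,
    @skip INT
WITH RECOMPILE   -- build a fresh plan on every execution instead of reusing a sniffed one
AS
WITH main AS (
    SELECT
        dt.FirstName + ' ' + dt.LastName   AS [FullName],
        dt.[Location],
        emp.FirstName + ' ' + emp.LastName AS [Manager],
        dt.[ApplicationDate]
    FROM ExtractedData dt
    LEFT JOIN ProcessTable tb
        LEFT JOIN Employee emp ON emp.ID = tb.EmployeeID
        ON tb.ExtID = dt.ID
),
searched AS (
    SELECT *, ROW_NUMBER() OVER (ORDER BY FullName ASC) AS RN
    FROM main
)
SELECT * FROM searched
WHERE RN BETWEEN @skip AND @take;
-- Lighter alternative: leave the proc header alone and add OPTION (RECOMPILE) to this SELECT only.

-- Or flush every cached plan (this is server-wide, so keep it to Dev):
DBCC FREEPROCCACHE;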
I just realized you are not partitioning your ROW_NUMBER() query. This means that, in effect, you are just running a TOP N query ordered by FullName, which seems strange as it will give fairly arbitrary results depending on the number of extracts and employees.
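To illustrate the point (the page size here is hypothetical): because FullName is unlikely to be unique and there is no tie-breaker in the ORDER BY, the first page your proc returns behaves much like the plain TOP query below, and rows sharing a FullName can come back in any order from one run to the next:

SELECT TOP (50)
    dt.FirstName + ' ' + dt.LastName   AS [FullName],
    dt.[Location],
    emp.FirstName + ' ' + emp.LastName AS [Manager],
    dt.[ApplicationDate]
FROM ExtractedData dt
LEFT JOIN ProcessTable tb
    LEFT JOIN Employee emp ON emp.ID = tb.EmployeeID
    ON tb.ExtID = dt.ID
ORDER BY [FullName];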
Upvotes: 1