Reputation: 885
I have some SQL that runs quickly on its own, but is 60 times slower when the same SQL is inside a stored proc. The SQL uses temp tables in many of its queries. Why is it slower in a sp?
Upvotes: 1
Views: 2082
Reputation: 34344
tl;dr
After populating the (temp) table, consider a) running create index and/or b) running update statistics on the table; both of these give the optimizer up-to-date information about the table's size and data distribution.
Hard to say with 100% certainty without a minimal, reproducible example (to include ASE version and query plans), but some ideas to consider:
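A minimal sketch of the tl;dr, assuming the table and column names are illustrative (they are not from the question), and assuming the database allows select into:

```sql
-- illustrative names; the actual table/columns come from your own SQL
select order_id, qty
into   #order_work
from   orders

-- a) create the index only after the table is populated ...
create index order_work_idx on #order_work (order_id)

-- b) ... and/or refresh optimizer statistics on the populated table
update statistics #order_work
```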
Regardless of why the proc's query(s) is run with a less-than-optimal query plan the general 'solution' is to make sure the query(s) is compiled with some up-to-date info about the temp table.
How/when to (re)compile the proc's query(s) depends on ASE version and any number of associated scenarios:
- deferred name resolution, procedure deferred compilation and optimize temp table resolution can dictate when individual queries, within a proc, are compiled; but even this may not be enough if idea #2 (above) is in play
- with recompile (to force a recompilation on each run), but this may not be enough if (again) idea #2 (above) is in play
- update statistics (again, preferably after the temp table has been populated) should force a recompilation of any follow-on queries

While the documentation's recommendation to create the temp table(s) prior to calling the proc may have some benefit, there are a few pitfalls there, too.
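The last two options might look like this in ASE; proc, table and column names are hypothetical:

```sql
-- option 1: force a fresh plan on every execution of the proc
create procedure load_orders
with recompile
as
    select order_id, qty into #order_work from orders
    select order_id, qty from #order_work where order_id > 100

-- option 2: run update statistics mid-proc, after the temp table is
-- populated, to force recompilation of the follow-on queries
create procedure load_orders2
as
    select order_id, qty into #order_work from orders
    update statistics #order_work
    select order_id, qty from #order_work where order_id > 100
```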
Upvotes: 1
Reputation: 213
Does your stored procedure have parameters?
If yes, SQL Server uses a process called parameter sniffing when it executes stored procedures that have parameters. When the procedure is compiled or recompiled, the value passed into the parameter is evaluated and used to create an execution plan. That value is then stored with the execution plan in the plan cache. On subsequent executions, that same value – and same plan – is used.
What happens when the values in a table you’re querying aren’t evenly distributed? What if one value would return 10 rows and another value would return 10,000 rows, or 10 million rows? What will happen is that the first time the procedure is run and the plan is compiled, whatever value is passed in is stored with the plan. Every time it’s executed, until it’s recompiled, the same value and plan will be used – regardless of whether it is the fastest or best plan for that value.
You can force SQL Server to recompile the stored procedure each time it is run. The benefit here is that the best query plan will be created each time it is run. However, recompiling is a CPU-intensive operation. This may not be an ideal solution for stored procedures that are run frequently, or on a server that is constrained by CPU resources already.
ALTER PROCEDURE SP_Test
@ProductID INT
WITH RECOMPILE
AS
SELECT OrderID, OrderQty
FROM SalesOrderDetail
WHERE ProductID = @ProductID
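If only one statement in the proc suffers from the sniffed value, a narrower alternative (standard T-SQL, shown on the same example proc) is a statement-level recompile hint, so the rest of the procedure's plan stays cached:

```sql
ALTER PROCEDURE SP_Test
@ProductID INT
AS
SELECT OrderID, OrderQty
FROM SalesOrderDetail
WHERE ProductID = @ProductID
OPTION (RECOMPILE)  -- recompile just this statement, using the current parameter value
```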
Upvotes: 0
Reputation: 885
Sybase says, "When you create a table in the same stored procedure or batch where it is used, the query optimizer cannot determine how large the table is because the table was not created when the query was optimized. This applies to both temporary tables and regular user tables."
Sybase recommends creating the temp tables outside of the stored proc. The docs also describe workarounds that involve creating indexes in particular ways; see the Sybase documentation for specifics.
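A minimal sketch of that recommendation, with hypothetical proc and table names:

```sql
-- create and populate the temp table in the outer batch, so it already
-- exists (with a known size) when the proc's queries are compiled
create table #order_work (order_id int, qty int)
insert #order_work select order_id, qty from orders

exec report_orders   -- proc references #order_work created by the caller

drop table #order_work
```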
Upvotes: 1