Reputation: 725
I have a table of about 3 million rows in a SQL Server 2012 database. An ETL package inserts about 50,000 new rows into the table.
Right after the insert, the ETL package updates the newly added rows using a stored procedure. The procedure has an input parameter, ExecutionID, to filter down to only the rows that were just inserted. I have created indexes based on the query, and if the statistics on the table are up to date, query performance is great. However, the stats are not updated every day, because the number of added rows doesn't reach the auto-update threshold (in SQL Server 2012 the default is roughly 20% of the table plus 500 rows, i.e. about 600,000 changes on a 3-million-row table, so 50,000 inserts never trigger it).
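For reference, the update step looks roughly like this (table, column, and procedure names here are placeholders, not my real schema):

```sql
-- Hypothetical shape of the ETL update step; real object names differ.
CREATE PROCEDURE dbo.UpdateNewRows
    @ExecutionID INT
AS
BEGIN
    SET NOCOUNT ON;

    -- Touches only the ~50,000 rows just inserted by this ETL run;
    -- a nonclustered index on ExecutionID supports the filter.
    UPDATE f
    SET    f.ProcessedFlag = 1
    FROM   dbo.FactTable AS f
    WHERE  f.ExecutionID = @ExecutionID;
END;
```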
My understanding of stored procedure plans was that the optimizer caches the plan and then reuses it until the plan is invalidated. Because of that, I expected that if I run the procedure once with the plan I want, that plan would be reused. However, I see multiple plans in the plan cache for the same procedure.
Why would that happen? In other words, why would SQL Server cache different plans for the same query?
Should I, in this case, update the stats manually before calling the procedure? It seems that the optimizer uses the parameter value to determine the plan, and since the stats have not been updated for the new batch of data, it won't choose the proper plan.
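In other words, would something like this before each call be the right approach? (Object and variable names are placeholders.)

```sql
-- Refresh statistics on the index that supports the ExecutionID filter,
-- so the histogram covers the newly inserted rows before the update runs.
UPDATE STATISTICS dbo.FactTable IX_FactTable_ExecutionID WITH FULLSCAN;

EXEC dbo.UpdateNewRows @ExecutionID = @CurrentExecutionID;
```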
Upvotes: 0
Views: 1709
Reputation: 5697
If your ExecutionID is auto-incrementing, then what is most likely happening is that the new values do not yet feature in the statistics histograms (because they are new). As a result, the optimiser knows nothing about the cardinality of those values and makes a poor guess.
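You can check this yourself by inspecting the histogram: if the highest RANGE_HI_KEY step is below the ExecutionID of the latest load, the new rows are effectively invisible to the optimiser and it will estimate very few rows for them (statistics and table names below are placeholders):

```sql
-- Show the histogram for the statistics object on ExecutionID.
-- Compare the top RANGE_HI_KEY to your latest ExecutionID value.
DBCC SHOW_STATISTICS ('dbo.FactTable', 'IX_FactTable_ExecutionID')
    WITH HISTOGRAM;
```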
I'd suggest you update the statistics on the table after each load, or you can freeze a plan (for example with a plan guide).
Alternatively, you can put an index hint on the update so that it uses the right index (and hence the plan you want) no matter what the statistics say.
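A minimal sketch of the hinted update, assuming a hypothetical index named IX_FactTable_ExecutionID:

```sql
-- Force the seek on the ExecutionID index regardless of stale statistics.
UPDATE f WITH (INDEX(IX_FactTable_ExecutionID))
SET    f.ProcessedFlag = 1
FROM   dbo.FactTable AS f
WHERE  f.ExecutionID = @ExecutionID;
```

The trade-off is that the hint keeps working even when stats are stale, but it also overrides the optimiser when a different access path would genuinely be better.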
Upvotes: 1