WooHoo

Reputation: 1922

Azure function slow executing a stored procedure

I'm using an Azure function like a scheduled job, using the cron timer. At a specific time each morning it calls a stored procedure.

The function now takes 4 minutes to run a stored procedure that completes in a few seconds in SSMS, and that time keeps increasing despite successful efforts to speed up the stored procedure itself.

The function is not doing anything intensive.

// Connection string and procedure name are elided in the original post.
using (SqlConnection conn = new SqlConnection(str))
{
    conn.Open();

    using (var cmd = new SqlCommand("Stored Proc Here", conn) { CommandType = CommandType.StoredProcedure, CommandTimeout = 600 })
    {
        // Typed date parameters covering the last 30 days.
        cmd.Parameters.Add("@Param1", SqlDbType.DateTime2).Value = DateTime.Today.AddDays(-30);
        cmd.Parameters.Add("@Param2", SqlDbType.DateTime2).Value = DateTime.Today;

        var result = cmd.ExecuteNonQuery();
    }
}

I've checked and the database is not under load with another process when the stored procedure is running.

Is there anything I can do to speed up the Azure function? Or any approaches to finding out why it's so slow?

UPDATE.

I don't believe Azure Functions is at fault; the issue seems to be with SQL Server.

I eventually ran the production SP and had a look at the execution plan. I noticed that the statistics were way off: for example, a join expected 20 rows to be returned, but the actual figure was closer to 800k.

The solution for my issue was to update the statistics on a specific table each week.

As for why the stats were out by so much: the client runs a batch update each night that inserts several hundred thousand rows. I can only assume this skewed the stats, and the effect is cumulative, so it gets worse over time.
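The weekly fix described above could be scheduled as a statistics refresh on the affected table. This is a sketch; the table name `dbo.MyTable` is a placeholder, since the real object isn't named in the post:

```sql
-- Hypothetical table name; run after the nightly batch load or on a weekly schedule.
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;

-- Alternatively, refresh every out-of-date statistic in the database:
EXEC sp_updatestats;
```

`WITH FULLSCAN` scans the whole table rather than sampling, which is slower but gives the optimizer accurate row estimates after large batch inserts.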

Upvotes: 4

Views: 1722

Answers (2)

ΩmegaMan

Reputation: 31576

For a project I worked on, we ran into the same thing. It's not an Azure Functions issue but a SQL Server issue. We were updating sprocs during development, and it turns out that SQL Server caches an execution plan (certain routes/indexes, in layman's terms) which can get out of sync with the new sproc.

We resolved it by specifying WITH RECOMPILE on the sproc, after which the API call and SSMS had the same timings.

Once the system is settled, that statement can and should be removed.
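The hint described above attaches to the procedure definition. A minimal sketch, with a hypothetical procedure name and body since neither appears in the question:

```sql
-- WITH RECOMPILE discards the cached plan and compiles a fresh one
-- on every execution (hypothetical procedure and table names).
CREATE OR ALTER PROCEDURE dbo.MyProc
    @Param1 datetime2,
    @Param2 datetime2
WITH RECOMPILE
AS
BEGIN
    SELECT *
    FROM dbo.MyTable
    WHERE CreatedAt BETWEEN @Param1 AND @Param2;
END;
```

Note this trades plan-cache reuse for plan freshness, which is why the answer recommends removing it once the system settles.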

Search on "slow sproc, fast in SSMS" and the like to find others who have run into this situation.

Upvotes: 1

Conor Cunningham MSFT

Reputation: 4481

Please be careful adding WITH RECOMPILE hints. Compilation is often far more expensive than execution for a given simple query, meaning that you may not get decent perf for all apps with this approach.

There are different possible reasons for your experience. One common reason for this kind of scenario is that you got different query plans in the app vs. SSMS paths. This can happen for various reasons (I will summarize below). You can determine whether you are getting different plans by using the Query Store (which records summary data about queries, plans, and runtime stats). Please review a summary of it here: https://learn.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-2017

You need a recent SSMS to get the UI, though you can run direct queries from any TDS client.

Now for a summary of some possible reasons. One possible reason for plan differences is SET options. These are per-session environment settings for a query, such as ANSI_NULLS on or off. Each different setting can change the plan choice and thus perf. Unfortunately, the defaults differ across language drivers (historical artifacts from when each was built; hard to change now without breaking apps). You can review the Query Store to see whether there are different "context settings" (each unique combination of SET options is a unique context settings entry in the Query Store). Each different set implies different possible plans and thus potential perf changes.
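The context-settings check described above can be done directly against the Query Store catalog views. A sketch: if one query text shows more than one `context_settings_id`, the app and SSMS compiled under different SET options:

```sql
-- One row per (query, plan); differing context_settings_id values for the
-- same query_id mean the query was compiled under different SET options.
SELECT q.query_id,
       q.context_settings_id,
       p.plan_id,
       rs.avg_duration
FROM sys.query_store_query AS q
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
ORDER BY q.query_id, q.context_settings_id;
```

Comparing `avg_duration` across plans for the same query also shows directly whether one path got a slower plan.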

The second major reason for plan changes like you describe in your post is parameter sniffing. Depending on the scope of compilation (for example, inside a sproc vs. ad hoc query text), SQL will sometimes look at the current parameter value during compilation to infer the frequency of that value in future executions. Instead of ignoring the value and using a default frequency, using a specific value can generate a plan that is optimal for a single value (or set of values) but potentially slower for values outside that set. You can see this in the query plan choice in the Query Store as well, btw.
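If parameter sniffing turns out to be the cause, one common mitigation is to tell the optimizer to ignore the sniffed value. A sketch, with hypothetical table and parameter names:

```sql
-- OPTIMIZE FOR UNKNOWN makes the optimizer use average density statistics
-- instead of the parameter value seen at compile time.
SELECT *
FROM dbo.MyTable
WHERE CreatedAt >= @Param1
OPTION (OPTIMIZE FOR (@Param1 UNKNOWN));
```

This gives a "generic" plan that is stable across parameter values, at the cost of not being the best plan for any particular value.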

There are other possible reasons for performance differences beyond what I mentioned. Sometimes there are perf differences when the client runs in MARS mode vs. not. There may also be differences in how you call the client drivers that impact perf beyond this.

I hope this gives you a few tools to debug possible reasons for the difference. Good luck!

Upvotes: 2
