Reputation: 23
How can a call to SqlCommand.ExecuteReader take less time to complete than the SQL batch itself, as seen in SQL Profiler?
I have the following simple code running in a console app, which calls SqlCommand.ExecuteReader and which I time using a Stopwatch:
var swQueryTime = new Stopwatch();
var conn = new SqlConnection("Data Source=.;Initial Catalog=master;Trusted_Connection=True;");
conn.Open();
string sql = @"select * from sys.dm_os_memory_clerks; select * from sys.dm_os_performance_counters";
for (int i = 0; i < 10; i++)
{
    var comm = new SqlCommand(sql, conn);
    swQueryTime.Restart();
    var dr = comm.ExecuteReader();
    swQueryTime.Stop();
    Console.WriteLine("ElapsedMilliseconds: {0}", swQueryTime.ElapsedMilliseconds);
    dr.Close();
    comm = null;
}
On average, the SQL batch duration reported by Profiler is 4 times longer than what is reported on the .NET side.
I have checked that Profiler is reporting durations in milliseconds.
I am not using the async version of SqlCommand.ExecuteReader.
Profiler's duration is not a sum of time across multiple threads/cores, from what I have read and have verified by comparing Profiler's start and end times.
Ideas appreciated.
Upvotes: 2
Views: 603
Reputation: 1063864
Because you have only timed to the very start of the batch: ExecuteReader returns as soon as the query begins producing results, not when the batch finishes. If you consume the data too, the times will probably line up:
using (var dr = comm.ExecuteReader())
{
    do
    {
        while (dr.Read()) { } // drain every row of the current result set
    } while (dr.NextResult()); // advance to the next result set in the batch
}
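Applied to the loop in the question, the measurement would look something like this sketch (reusing the conn, sql, and swQueryTime from above); the stopwatch now stops only after every row of both result sets has been consumed, which is what Profiler's duration covers:
swQueryTime.Restart();
using (var comm = new SqlCommand(sql, conn))
using (var dr = comm.ExecuteReader())
{
    do
    {
        while (dr.Read()) { } // consume every row of the current result set
    } while (dr.NextResult()); // and every result set in the batch
}
swQueryTime.Stop(); // now includes execution plus transfer of all rows
Console.WriteLine("ElapsedMilliseconds: {0}", swQueryTime.ElapsedMilliseconds);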
Incidentally, there are also a few SET options that can change the performance of some queries. I'm not sure that applies here, but it can significantly impact tables that have computed + persisted columns: if the session's SET values aren't compatible with the settings in effect when the column was created, SQL Server has to re-run the calculation per row. This is especially noticeable for computed + persisted + indexed columns when you filter on that column: it has to do a table scan (or similar) rather than an index scan / index seek.
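To make the SET-option point concrete, here is a minimal T-SQL sketch; the table, column, and index names are invented for illustration, and whether the index is actually skipped depends on the server's settings and plan choices:
-- A persisted computed column with an index on it
CREATE TABLE dbo.Orders (
    Id int PRIMARY KEY,
    Qty int NOT NULL,
    UnitPrice money NOT NULL,
    Total AS (Qty * UnitPrice) PERSISTED
);
CREATE INDEX IX_Orders_Total ON dbo.Orders (Total);

SET ARITHABORT OFF; -- incompatible with the settings the index requires
SELECT Id FROM dbo.Orders WHERE Total > 100; -- may recompute per row and scan

SET ARITHABORT ON;  -- compatible settings restored
SELECT Id FROM dbo.Orders WHERE Total > 100; -- can seek IX_Orders_Total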
Upvotes: 2