Reputation: 1790
I am reading records from a Log table. The table currently has 260,000 records with 8 columns. This is the code, using EF6:
using (var ctx = new DbEntities())
{
    ctx.Configuration.ProxyCreationEnabled = false;
    ctx.Configuration.AutoDetectChangesEnabled = false;
    ctx.Configuration.LazyLoadingEnabled = false;

    Stopwatch sw = Stopwatch.StartNew();
    var k = ctx.Logs.Where(l => l.ApplicationId == id && l.Level <= 3 && (l.Dat == 20160514 || l.Dat == 20160513)).ToList();
    sw.Stop();
}
The above takes 7-8 seconds to return 2,213 rows. Running the equivalent query in SSMS takes less than a second.
I have also tried running the query as raw SQL:
ctx.Logs.SqlQuery("SELECT * FROM Log WITH(NOLOCK) WHERE ApplicationId = 4 and Level <= 3 and (Dat = 20160513 or Dat = 20160514)").AsNoTracking().ToList();
This also takes ~7 seconds.
I have two non-clustered indexes: one on just ApplicationId DESC, and the other on ApplicationId DESC, Dat DESC, Level ASC.
Activity monitor never shows load on the database.
This seems like a long time for a query of this size. Or are my expectations just skewed?
Can anyone see something I'm doing wrong?
Note: SQL Server sits on an Azure VM with 4 cores and 8 GB of memory, of which it gets 3 GB.
Note: I am now aware that I should not have called my column Level.
EDIT
the following is from the EF6 Log
SELECT
[Extent1].[LogId] AS [LogId],
[Extent1].[ApplicationId] AS [ApplicationId],
[Extent1].[Type] AS [Type],
[Extent1].[Source] AS [Source],
[Extent1].[Message] AS [Message],
[Extent1].[LogTime] AS [LogTime],
[Extent1].[Level] AS [Level],
[Extent1].[Dat] AS [Dat]
FROM [dbo].[Log] AS [Extent1]
WHERE ([Extent1].[ApplicationId] = @p__linq__0) AND ([Extent1].[Level] <= 3) AND ([Extent1].[Dat] IN (20160514,20160513))
-- p__linq__0: '2648' (Type = Int32, IsNullable = false)
-- Executing at 5/14/2016 8:38:38 PM -04:00
-- Completed in 101 ms with result: SqlDataReader
It would seem the query itself is quite fast. So why is ToList() so slow?
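One way to check whether the time goes into SQL execution or into object materialization is to time a raw SqlDataReader over the same statement and compare it with EF's ToList(). This is a rough sketch; the connection string placeholder and the hard-coded ApplicationId are assumptions, not values from the question.

```csharp
using System;
using System.Data.SqlClient;
using System.Diagnostics;

class ReaderTiming
{
    static void Main()
    {
        var sql = "SELECT * FROM Log WITH(NOLOCK) WHERE ApplicationId = 4 " +
                  "AND Level <= 3 AND (Dat = 20160513 OR Dat = 20160514)";

        var sw = Stopwatch.StartNew();
        using (var conn = new SqlConnection("<connection string>")) // assumption: your connection string
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                var rows = 0;
                while (reader.Read()) rows++; // drain the reader; no object mapping at all
                Console.WriteLine("{0} rows in {1} ms", rows, sw.ElapsedMilliseconds);
            }
        }
        // If this loop is fast while ctx.Logs...ToList() is slow, the cost is in
        // EF's entity materialization, not in SQL Server.
    }
}
```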
Upvotes: 3
Views: 1974
Reputation: 1795
Execute the same query in SQL Server Management Studio. After the execution, click the "show execution plan" button and read the report to see what the problem is. If needed, apply the suggested fix by adding the missing index.
Upvotes: 0
Reputation: 1790
After some trial and error I found I could cut the time by at least half.
By first making a Count() call, I can break the request into multiple pages and fetch them in small increments.
The same 2,213 items now take 3-4 seconds by making two calls in parallel at 1,000 rows each. 10,000 items used to take 60 seconds; with this approach it's ~18 seconds.
async Task<List<Log>> GetLogs(int count, int pageSize, string search)
{
    // Number of pages needed to cover all rows (round up).
    var pages = (int)Math.Ceiling((double)count / pageSize);
    var tasks = new Task<List<Log>>[pages];
    for (int i = 0; i < pages; i++)
    {
        var page = i; // capture the loop variable for the closure
        tasks[i] = Task.Run(() =>
        {
            // Each page gets its own context so the queries can run in parallel.
            using (var c = new DbEntities())
            {
                c.Configuration.ProxyCreationEnabled = false;
                c.Configuration.AutoDetectChangesEnabled = false;
                var query = c.Logs.SqlQuery(
                    "SELECT * FROM Log WITH(NOLOCK) " + search +
                    string.Format(" ORDER BY LogTime DESC OFFSET {0} ROWS FETCH NEXT {1} ROWS ONLY",
                                  page * pageSize, pageSize));
                return query.ToList();
            }
        });
    }
    // Wait for all pages, then flatten into a single list.
    var results = await Task.WhenAll(tasks);
    return results.SelectMany(r => r).ToList();
}
Going smaller than 1,000 rows per page didn't seem to give better results.
Upvotes: 1
Reputation: 1052
EF has to map the SQL result into objects, which may involve reflection and dynamically building a materializer to turn the result set into a list of Log entities. If you're timing a single query, try executing the same thing a number of times within the same context. You will find that the per-query execution time decreases, because the materializer is already built and doesn't need to be created again.
I'm not an expert on EF; I just experimented with the same issue and found that the several-second delay only affects the first execution against the context.
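A rough way to see this warm-up effect: run the identical query twice against the same context and compare timings. This is a sketch using the DbEntities context and Logs set from the question; the hard-coded ApplicationId is an assumption.

```csharp
using System;
using System.Diagnostics;
using System.Linq;

// First run pays for view generation and materializer compilation;
// the second run reuses the cached pipeline and should be much faster.
using (var ctx = new DbEntities())
{
    ctx.Configuration.ProxyCreationEnabled = false;
    ctx.Configuration.AutoDetectChangesEnabled = false;

    var sw = Stopwatch.StartNew();
    var first = ctx.Logs.Where(l => l.ApplicationId == 4 && l.Level <= 3).ToList();
    Console.WriteLine("First run:  {0} ms", sw.ElapsedMilliseconds);

    sw.Restart();
    var second = ctx.Logs.Where(l => l.ApplicationId == 4 && l.Level <= 3).ToList();
    Console.WriteLine("Second run: {0} ms", sw.ElapsedMilliseconds);
}
```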
Upvotes: 2