Reputation: 7105
Using Entity Framework with SQL Server 2008, we have an application that writes data at high volume, roughly 1,000 new rows per minute, each in its own DbContext.SaveChanges call (we're not batching them together).
The issue is that our writes fall way, way behind, to the point that the system seems to be thrashing. For example, we'll call SaveChanges with new rows a couple thousand times over two minutes without a single write being made, and then all of a sudden we'll get a handful of writes (but many are lost entirely).
We've taken a SQL trace and seen that SQL Server doesn't receive a write command for even 10% of our SaveChanges calls.
So it would seem there's an issue somewhere between SaveChanges and SQL Server. I'm wondering how this call works. Does it use thread pooling? Queueing? Some buffer that we could be overrunning? Maybe it's silently failing under the volume of writes?
MSDN is pretty useless at explaining how this stuff actually works.
Upvotes: 3
Views: 3000
Reputation: 1985
Read the performance considerations in MSDN, and also have a look at Fastest Way of Inserting in Entity Framework.
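The linked answer's main advice amounts to: turn off automatic change detection, call SaveChanges once per batch instead of once per row, and dispose/recreate the context periodically so the change tracker stays small. A minimal sketch, assuming EF's DbContext API; MyDbContext and Row are placeholder names, not from your code:

```csharp
using System;
using System.Collections.Generic;

static void BulkInsert(IEnumerable<Row> rows, int batchSize = 100)
{
    MyDbContext context = null;
    try
    {
        context = new MyDbContext();
        // Skip the per-entity scan EF does on every Add -- a big win at volume.
        context.Configuration.AutoDetectChangesEnabled = false;

        int count = 0;
        foreach (var row in rows)
        {
            context.Rows.Add(row);
            if (++count % batchSize == 0)
            {
                context.SaveChanges();          // one round of commands per batch
                context.Dispose();              // reset the change tracker
                context = new MyDbContext();
                context.Configuration.AutoDetectChangesEnabled = false;
            }
        }
        context.SaveChanges();                  // flush the final partial batch
    }
    finally
    {
        if (context != null) context.Dispose();
    }
}
```

This is a sketch, not a drop-in fix; it needs a real context and can't run without a database, but it shows the shape of the batching the linked answer recommends.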
Upvotes: 2
Reputation: 13286
I don't know how it works internally, but under this kind of load you'd be better off putting the data into a queue and using one or more (but a limited number of) threads to drain the queue and write to the database. You can test and tune the number of threads so you don't lose data.
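The approach above can be sketched as a producer/consumer setup, assuming EF's DbContext API; MyDbContext and Row are hypothetical names. Producers add rows to a bounded BlockingCollection (a full queue blocks callers instead of dropping data), and a small fixed pool of worker threads drains it. Each worker uses its own context, since a DbContext instance is not thread-safe:

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class RowWriter
{
    // Bounded capacity gives back-pressure: Add blocks when the queue is full.
    private readonly BlockingCollection<Row> _queue =
        new BlockingCollection<Row>(boundedCapacity: 10000);
    private readonly Task[] _workers;

    public RowWriter(int workerCount)
    {
        _workers = new Task[workerCount];
        for (int i = 0; i < workerCount; i++)
        {
            _workers[i] = Task.Run(() =>
            {
                // One context per worker thread; contexts are not thread-safe.
                using (var context = new MyDbContext())
                {
                    foreach (var row in _queue.GetConsumingEnumerable())
                    {
                        context.Rows.Add(row);
                        context.SaveChanges();
                    }
                }
            });
        }
    }

    // Called by producers; blocks rather than losing writes when overloaded.
    public void Enqueue(Row row) { _queue.Add(row); }

    public void Shutdown()
    {
        _queue.CompleteAdding();        // workers drain remaining rows, then exit
        Task.WaitAll(_workers);
    }
}
```

The bounded capacity is the key design choice here: when the database can't keep up, callers slow down instead of silently overrunning a buffer.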
Upvotes: 0