Reputation: 23098
SQL Server 2014
introduced what seems to be a great feature: In-Memory OLTP (In-Memory Optimization). It reminds me of MySQL's
MEMORY Storage Engine, which I tried several years ago and which gave me noticeably better performance.
I know this has already been asked, but I am interested in a more targeted scenario:
Several SSRS
reports use large stored procedures that, for performance reasons (really ugly and slow execution plans for queries with many joins, grouping, etc.), split the logic up by using temporary tables to hold partial results.
Since temporary tables are written to disk, I expect a performance impact when dealing with large quantities of data (tens or even hundreds of thousands of rows).
Does using In-Memory OLTP tables
make sense in this scenario? I am thinking about the following pros and cons.
Pros:
Cons:
Upvotes: 3
Views: 622
Reputation: 171246
You got that right.
Note, though, that temp tables, like any other table, are written lazily (even more lazily than normal tables). The write to disk is usually not on the critical path because it happens asynchronously, except for very big temp tables that exhaust SQL Server's ability to keep dirty buffers in memory.
Also, if the temp table is deallocated quickly, there will not be any page writes at all.
So the "pro" that you have listed often does not come into (full) effect.
That said, in-memory tables tend to use less CPU even in interop mode, so the "pro" is always at least a little bit true.
You can also consider using schema-only Hekaton tables. Those are guaranteed never to cause any data writes.
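As a sketch of that suggestion: a memory-optimized table created with `DURABILITY = SCHEMA_ONLY` keeps its rows entirely in memory, so its data is never logged or written to disk (and is lost on a server restart, which is fine for intermediate report results). The table and column names below are illustrative, and the database is assumed to already have a `MEMORY_OPTIMIZED_DATA` filegroup:

```sql
-- Illustrative schema-only memory-optimized table used as a
-- temp-table replacement for partial report results.
-- Assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup.
CREATE TABLE dbo.PartialResults
(
    OrderId     INT            NOT NULL,
    CustomerId  INT            NOT NULL,
    TotalAmount DECIMAL(18, 2) NOT NULL,
    -- Memory-optimized tables require an index; in SQL Server 2014
    -- they must also have a PRIMARY KEY. Size BUCKET_COUNT roughly
    -- at or above the expected number of distinct key values.
    CONSTRAINT PK_PartialResults PRIMARY KEY NONCLUSTERED
        HASH (OrderId) WITH (BUCKET_COUNT = 131072)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```

Note that unlike a `#temp` table, this table is a permanent object shared by all sessions, so the stored procedure would need to clear it (or partition it by a session key) between runs.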
Upvotes: 2