Reputation: 119
I am inserting data into a table from XML, and when the XML is large it takes a long time to insert. The problem is that while the data is being inserted between BEGIN TRAN and COMMIT TRAN, other sessions are not able to read from or write to the BrokerData table.
I am using the default isolation level. Please guide me on what changes I should make in the database so that the table is not blocked while one session inserts or updates data, and users in other sessions can still perform DML operations on the BrokerData table.
BEGIN TRAN
INSERT INTO BrokerData
SELECT col.value('(Section/text())[1]', 'NVARCHAR(MAX)') AS Section
,col.value('(LineItem/text())[1]', 'NVARCHAR(MAX)') AS LineItem
,col.value('(XFundCode/text())[1]', 'NVARCHAR(MAX)') AS XFundCode
,col.value('(StandardDate/text())[1]', 'NVARCHAR(MAX)') AS StandardDate
,col.value('(StandardValue/text())[1]', 'VARCHAR(MAX)') AS StandardValue
,col.value('(ActualProvidedByCompany/text())[1]', 'VARCHAR(MAX)') AS ActualProvidedByCompany
FROM @BogyXML.nodes('/Root/PeriodicalData') AS tab (col)
COMMIT TRAN
Upvotes: 1
Views: 751
Reputation: 1811
First, it is good to know that the default isolation level for Microsoft SQL Server is READ COMMITTED.
READ COMMITTED uses pessimistic locking to protect data. With this isolation level set, a transaction cannot read uncommitted data that is being added or changed by another transaction. A transaction attempting to read data that is currently being changed is blocked until the transaction changing the data releases the lock. A transaction running under this isolation level issues shared locks, but releases row or page locks after reading a row.
So, in your case the isolation level is not the only problem. Shredding XML is a heavy operation, and the best practice is to load this data first into some staging table.
The staging table can take several forms.
In your case, probably the best option is a memory-optimized table, which offers two modes of durability: SCHEMA_AND_DATA and SCHEMA_ONLY. If you want the faster mode and you don't need this data stored on disk, choose SCHEMA_ONLY: because non-durable memory-optimized tables do not incur logging overhead, transactions writing to them run faster than write operations on durable tables.
The In-Memory OLTP feature built into SQL Server 2016 adds a new memory-optimized relational data management engine and a native stored procedure compiler to the platform that you can use to run transactional workloads with higher concurrency.
A little configuration is required to use the In-Memory OLTP feature, and if you are not in a position to use it, a temporary table can serve as a replacement, though it is not as good as a memory-optimized table.
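As a sketch, a non-durable staging table could look like this (the table name is hypothetical, the columns mirror the query in the question, and it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup configured):

```sql
-- Non-durable staging table: data lives only in memory, no logging overhead.
-- Assumes a MEMORY_OPTIMIZED_DATA filegroup already exists on the database.
CREATE TABLE dbo.BrokerData_Staging
(
    Id INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
    Section NVARCHAR(MAX) NULL,
    LineItem NVARCHAR(MAX) NULL,
    XFundCode NVARCHAR(MAX) NULL,
    StandardDate NVARCHAR(MAX) NULL,
    StandardValue VARCHAR(MAX) NULL,
    ActualProvidedByCompany VARCHAR(MAX) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```

Note that LOB columns such as NVARCHAR(MAX) in memory-optimized tables require SQL Server 2016 or later; on earlier versions you would need to cap the column lengths.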
Once the data is loaded and held in SQL Server's memory, you can start updating your real table, which is BrokerData in your case.
However, updating the real table is still a problem, and I am pretty sure it will be faster if you prepare the data exactly as needed in the staging table and then move it to the production table. If only a small number of queries access the production table, you can use the READ UNCOMMITTED isolation level.
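The final move can then be a short, set-based INSERT (a sketch; dbo.BrokerData_Staging is a hypothetical staging table name), so the lock on BrokerData is held for far less time than when the XML is shredded inside the transaction:

```sql
BEGIN TRAN;

-- The XML has already been shredded into the staging table,
-- so this is a fast table-to-table copy.
INSERT INTO BrokerData
    (Section, LineItem, XFundCode, StandardDate, StandardValue, ActualProvidedByCompany)
SELECT Section, LineItem, XFundCode, StandardDate, StandardValue, ActualProvidedByCompany
FROM dbo.BrokerData_Staging;

COMMIT TRAN;
```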
The READ UNCOMMITTED isolation level is the least restrictive setting. It allows a transaction to read data that has not yet been committed by other transactions. SQL Server ignores any locks and reads data from memory. Furthermore, transactions running under this isolation level do not acquire shared (S) locks to prevent other transactions from changing the data being read.
For these reasons, READ UNCOMMITTED is never a good choice for line-of-business applications where accuracy matters most, but it might be acceptable for a reporting application where the performance benefit outweighs the need for a precise value.
It is really easy to use the READ UNCOMMITTED isolation level directly in a query:
SELECT *
FROM BrokerData
WITH (NOLOCK)
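The same effect can also be set once for the whole session instead of hinting each table (standard T-SQL, not specific to this scenario):

```sql
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT *
FROM BrokerData;  -- now reads without taking shared locks
```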
If the data is really important, you cannot allow your queries to read dirty data, and the probability of a rollback is high, then you should think about READ_COMMITTED_SNAPSHOT. This isolation level is a combination of READ COMMITTED and SNAPSHOT.
The READ_COMMITTED_SNAPSHOT isolation level is an optimistic alternative to READ COMMITTED. Like the SNAPSHOT isolation level, you must first enable it at the database level before setting it for a transaction.
During the transaction, SQL Server copies rows modified by other transactions into a collection of pages in tempdb
known as the version store. When a row is updated multiple times, a copy of each change is in the version store. This set of row versions is called a version chain.
If you are going to use READ_COMMITTED_SNAPSHOT, make sure tempdb has plenty of space for the version store.
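Enabling it is a one-time, database-level setting (YourDatabase is a placeholder; the ALTER needs exclusive access to the database, hence the ROLLBACK IMMEDIATE option):

```sql
ALTER DATABASE YourDatabase
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;  -- terminates other sessions so the change can take effect
```

After this, existing READ COMMITTED readers use row versions from tempdb instead of blocking on writers' locks, with no changes needed in application queries.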
Upvotes: 1