Reputation: 193
I'm starting to develop an application that has to deal with data-access concurrency issues, and I'm having trouble understanding how to properly use transaction isolation levels.
I have the following table called Folders
which contains a tree-like folder structure:
+----------+----------------+--------------------+----------------+
| Id (int) | Name (varchar) | FullPath (varchar) | ParentId (int) |
|----------+----------------+--------------------+----------------|
| 1        | 'root1'        | '/root1/'          | NULL           |
| 2        | 'c1'           | '/root1/c1/'       | 1              |
| 3        | 'c2'           | '/root1/c1/c2/'    | 2              |
| 4        | 'root2'        | '/root2/'          | NULL           |
+----------+----------------+--------------------+----------------+
And I'm trying to implement the "Move folder" workflow like this (say, I want to move the folder with ID=2 to a new parent with ID=4; the whole batch is sketched below):
1. Start a transaction.
2. Read the folder with ID=2 (call it folder2): SELECT * FROM Folders WHERE Id=2
3. Read the folder with ID=4 (call it folder4): SELECT * FROM Folders WHERE Id=4
4. Update ParentId and FullPath of folder2: UPDATE Folders SET ParentId=folder4.Id, FullPath=folder4.FullPath+folder2.Name+'/' WHERE Id = folder2.Id
5. Find all subfolders of folder2 (call them subfoldersOfFolder2): SELECT * FROM Folders WHERE FullPath LIKE folder2.FullPath + '%'
6. For each subfolder in subfoldersOfFolder2, update its FullPath column (query omitted).
7. Commit the transaction.
Obviously, I do not want any other transactions to write (or even read) folder2 and subfoldersOfFolder2 until my transaction completes.
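Put together as one T-SQL batch, it looks roughly like this (a sketch; the variable names and varchar lengths are mine, and the step 6 query stays omitted):

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- step 1
BEGIN TRANSACTION;

DECLARE @folder2Name varchar(100), @folder2Path varchar(400), @folder4Path varchar(400);

-- Steps 2-3: read the folder being moved and the new parent
SELECT @folder2Name = Name, @folder2Path = FullPath FROM Folders WHERE Id = 2;
SELECT @folder4Path = FullPath FROM Folders WHERE Id = 4;

-- Step 4: re-parent folder2 and rewrite its path
UPDATE Folders
SET ParentId = 4, FullPath = @folder4Path + @folder2Name + '/'
WHERE Id = 2;

-- Step 5: find the subfolders by folder2's old path
SELECT * FROM Folders WHERE FullPath LIKE @folder2Path + '%';

-- Step 6: update FullPath of each subfolder (query omitted)

COMMIT TRANSACTION;  -- step 7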
After reading this article on SQL Server transactions, I got the notion that setting the isolation level to Serializable at step #1 would help me achieve this. But for some reason this doesn't seem to happen. I tried leaving the transaction open (stopping before step #7), opening another instance of SSMS, and running SELECT * FROM Folders
; the query completes successfully, and I can still see the data that was read by the 1st transaction.
Why is this happening? How can I prevent anyone else from reading/writing folder2 and subfoldersOfFolder2? I feel like I'm missing something important about how transactions actually lock data.
Upvotes: 4
Views: 813
Reputation: 2506
Changing the isolation level is something I would only do as a last resort. In this case, I'd stick with a single transaction that updates the parent IDs, but put the code that updates the folder paths in an update trigger.
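A minimal sketch of that trigger, assuming the schema from the question (the trigger name, the recursive CTE, and the varchar(8000) cast are my own choices):

CREATE TRIGGER trg_Folders_UpdatePath
ON dbo.Folders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Only react when a folder was renamed or re-parented
    IF UPDATE(ParentId) OR UPDATE(Name)
    BEGIN
        -- Recompute FullPath for the changed folders and all their descendants
        ;WITH Affected AS (
            SELECT f.Id,
                   CAST(ISNULL(p.FullPath, '/') + f.Name + '/' AS varchar(8000)) AS NewPath
            FROM dbo.Folders f
            LEFT JOIN dbo.Folders p ON p.Id = f.ParentId
            WHERE f.Id IN (SELECT Id FROM inserted)
            UNION ALL
            SELECT c.Id,
                   CAST(a.NewPath + c.Name + '/' AS varchar(8000))
            FROM dbo.Folders c
            JOIN Affected a ON c.ParentId = a.Id
        )
        UPDATE f
        SET FullPath = a.NewPath
        FROM dbo.Folders f
        JOIN Affected a ON a.Id = f.Id;
    END
END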
Upvotes: 0
Reputation: 755237
When you use Serializable
, what this does is keep the shared locks (from the SELECT
) on the rows you've read in place until the transaction completes. But a shared lock on a row does not prevent another transaction from reading that same row; it just stops another transaction from getting an exclusive lock on that row (which would conflict with the shared lock) for updating or deleting.
If you want to prevent any other transaction from even reading (SELECT
) those rows, you need to take an exclusive lock when you SELECT
:
SELECT *
FROM dbo.Folders WITH (XLOCK)
WHERE ....
Now if this transaction "stays open", no other transaction can read any of the rows selected by that WHERE
condition - until that SELECT .. FROM dbo.Folders WITH (XLOCK)
transaction has been committed or rolled back.
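Applied to the question's "move folder" scenario, the pattern would look something like this (a sketch; I'm hard-coding the IDs and the path from the example data):

BEGIN TRANSACTION;

-- Take exclusive locks up front; no other transaction can read or write these rows now
SELECT * FROM dbo.Folders WITH (XLOCK) WHERE Id = 2;                       -- folder2
SELECT * FROM dbo.Folders WITH (XLOCK) WHERE Id = 4;                       -- folder4
SELECT * FROM dbo.Folders WITH (XLOCK) WHERE FullPath LIKE '/root1/c1/%';  -- subfolders

-- ... perform the UPDATEs from the question here; the exclusive locks are
-- held until the end of the transaction ...

COMMIT TRANSACTION;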
Upvotes: 2
Reputation: 5458
If you do the following, it does exactly what you want (blocks readers and writers from seeing the rows). In session 1:
create table dbo.test(i1 int, a1 varchar(25))
insert into dbo.test values (1,'NY'),(1,'NY'),(1,'NJ'),(2,'NY'),(2,'NY'),(2,'NJ')
set transaction isolation level serializable
begin transaction
select * from dbo.test where i1=1
update dbo.test set i1=3 where a1='NJ'
In session 2, trying
select * from dbo.test where i1=1
hangs....
Adding BEGIN TRY and CATCH can only improve things, but even without it, serializable works. You are not showing us all your code.
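For completeness, the TRY/CATCH version of the session 1 script would look something like this (a sketch; THROW needs SQL Server 2012 or later):

BEGIN TRY
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRANSACTION;
    SELECT * FROM dbo.test WHERE i1 = 1;
    UPDATE dbo.test SET i1 = 3 WHERE a1 = 'NJ';
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;  -- release the locks on failure
    THROW;                                    -- re-raise the original error
END CATCH;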
Upvotes: 0