Reputation: 3077
Pseudocode:
var data = ExecuteMSSQLQuery(
    "SELECT Id FROM Table WHERE Status='not_processed'");
if (ProcessData(data))
{
    ExecuteMSSQLQuery(
        "UPDATE Table SET Status='processed' WHERE Status='not_processed'");
}
I want to make sure that the rows updated by the UPDATE query are exactly the same rows that were returned by the SELECT. I know that one solution is to use a temp table, but the question in my mind is: can I achieve this just by setting the transaction isolation level to SERIALIZABLE? Or does that only affect SELECTs? What would be the best solution here?
The database in question is MSSQL 2012, if that is relevant.
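For reference, the temp-table approach mentioned above could look roughly like this (a sketch only; it assumes the `Id`/`Status` columns from the pseudocode, and the `#ToProcess` name is made up):

```sql
BEGIN TRAN;

-- Capture the exact set of Ids the SELECT saw.
SELECT Id
INTO #ToProcess
FROM [Table]
WHERE Status = 'not_processed';

-- ... ProcessData(...) works against the captured Ids ...

-- Update only the captured rows, not whatever currently
-- matches Status = 'not_processed'.
UPDATE t
SET t.Status = 'processed'
FROM [Table] t
JOIN #ToProcess p ON p.Id = t.Id;

COMMIT TRAN;
DROP TABLE #ToProcess;
```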
Upvotes: 0
Views: 1575
Reputation: 279
There are three inconsistency issues possible in a multi-user environment:
Dirty Reads: A transaction reads uncommitted (dirty) data from other, uncommitted transactions.
Non-Repeatable Reads: Subsequent attempts to read the same data from within the same transaction return different results. This inconsistency arises when other transactions modify, or even delete, data between the reads done by the affected transaction.
Phantom Reads: This phenomenon occurs when subsequent reads within the same transaction return new rows (ones that the transaction did not read before). This happens when another transaction inserted new data in between the reads done by the affected transaction.
This table will show you possible inconsistencies for each transaction level:
+----------------+-------------+----------------------+---------------+
|Isolation Level | Dirty Reads | Non-Repeatable Reads | Phantom Reads |
+----------------+-------------+----------------------+---------------+
|Read Uncommitted|     YES     |         YES          |      YES      |
|Read Committed  |     NO      |         YES          |      YES      |
|Repeatable Read |     NO      |         NO           |      YES      |
|Serializable    |     NO      |         NO           |      NO       |
|Snapshot        |     NO      |         NO           |      NO       |
+----------------+-------------+----------------------+---------------+
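For completeness, the isolation level is set per connection in T-SQL before opening the transaction (the database name below is a placeholder; note that SNAPSHOT must first be enabled at the database level):

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;
-- ... queries here run under SERIALIZABLE ...
COMMIT TRAN;

-- SNAPSHOT additionally requires this database option to be enabled once:
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
```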
In your case the problem is with non-repeatable reads: you find a row with a certain value during the SELECT statement (e.g. 50) and want to increase it by 10% (to 55), but in the meantime someone else has already changed it to 100, so by the time you commit the transaction the result will be 110. This can be solved easily with REPEATABLE READ, without using the heavily locking isolation level (SERIALIZABLE), which blocks you from reading/inserting/updating anything on the table during the transaction even though you are only updating one row, or SNAPSHOT, which uses tempdb to keep row versions.
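The read-then-update scenario above can be sketched under REPEATABLE READ (table and column names `Products`, `ProductId`, `Qty` are illustrative, matching the examples further down):

```sql
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRAN;

-- The shared lock taken by this read is held until commit,
-- so no other transaction can change the row in between.
SELECT Qty FROM Products WHERE ProductId = 1;

-- The 10% increase is therefore applied to the value we actually read.
UPDATE Products SET Qty = Qty * 1.10 WHERE ProductId = 1;

COMMIT TRAN;
```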
Update on your comment
Using SNAPSHOT or SERIALIZABLE will improve your consistency by not updating rows that were added after your UPDATE statement started. The difference between the two, however, is that SNAPSHOT gives you optimistic reads (you can still read the original data), whereas SERIALIZABLE won't: it simply locks the whole object and holds the lock until the transaction has completed.
So, for example, if we had 10 records with ProductType 1:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;

UPDATE Products
SET Qty = 45
WHERE ProductType = 1;
-- transaction still open here
and another transaction inserts a row with ProductType = 1, it won't be committed until the first transaction has completed, and the update will affect only the 10 rows.
You could possibly achieve the same result without that locking by using REPEATABLE READ:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRAN;

UPDATE Products
SET Qty = 45
WHERE ProductType = 1;
-- transaction still open here
We updated 10 rows, and another transaction inserts a row with ProductType = 1 and Qty = 5 without being blocked. Committing this results in 10 changed rows plus the new row with ProductType 1 and Qty 5. However, if you execute the UPDATE statement again within that transaction, after the new row was added, it will update 11 rows.
Upvotes: 1