Reputation: 3982
I have many .NET processes reading messages from a SQL Server 2008 DB table and processing them one at a time. I implemented a simple SP to 'lock' the row being read by any one process, so that no two processes process the same row.
BEGIN TRAN
    SELECT @status = status FROM t WHERE t.id = @id
    IF @status = 'L'
    BEGIN
        -- Already locked by another process. Skip this row.
        COMMIT
        RETURN 0
    END
    ELSE
    BEGIN
        UPDATE t SET status = 'L' WHERE id = @id
        COMMIT
        RETURN 1
    END
However, this is flawed: sometimes a row gets 'locked' and processed twice. I suspect there is a concurrency problem: two processes can read the status before either one updates it.
I think this could be resolved by implementing a read block somehow (i.e. making the transaction block READs), but I am unsure how to do this. Can anyone help?
Thanks a lot in advance
Ryan
Upvotes: 5
Views: 112
Reputation: 55444
Did you try using:
SELECT @status = status FROM t WITH (UPDLOCK) WHERE t.id = @id
Refer to the SQL Server documentation on table hints for more details.
The thing you're missing is that a SELECT statement doesn't normally keep the row locked. So if one process has executed the SELECT but hasn't executed the UPDATE yet, another process can come along, run the same SELECT, and still see the row as unlocked. With UPDLOCK, the SELECT takes an update lock on the row and holds it, so the other process can't acquire it until the first process commits the transaction, which is the behavior you want.
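For example, the procedure from the question could be written along these lines (a sketch only, reusing the table t, the status column, and the @id/@status parameters from the question; the only change is the hint):

BEGIN TRAN
    -- UPDLOCK makes this SELECT take an update lock that is held until the
    -- transaction ends, so a second process running the same statement waits
    -- here instead of also seeing the row as unlocked.
    SELECT @status = status FROM t WITH (UPDLOCK) WHERE t.id = @id
    IF @status = 'L'
    BEGIN
        -- Already locked by another process
        COMMIT
        RETURN 0
    END
    UPDATE t SET status = 'L' WHERE id = @id
COMMIT
RETURN 1

Note that with the hint the second process waits on the SELECT rather than skipping the row; it only reaches the RETURN 0 branch once the first transaction has committed.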
EDIT: Of course, doing it with one statement as Martin suggests is the easiest way, and you avoid having to deal with the locking issue at all.
Upvotes: 2
Reputation: 453278
Why not just replace the whole thing with
UPDATE t SET status = 'L' WHERE id = @id AND status <> 'L'
RETURN @@ROWCOUNT
This avoids two table accesses and doesn't hold locks open for any longer than necessary.
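Wrapped up as a complete procedure, it might look something like this (the procedure name dbo.TryLockRow and the parameter type are placeholders, not from the original post):

CREATE PROCEDURE dbo.TryLockRow
    @id int
AS
BEGIN
    SET NOCOUNT ON;
    -- The UPDATE both tests and sets the status in one atomic statement,
    -- so two callers can never both succeed for the same row.
    UPDATE t SET status = 'L' WHERE id = @id AND status <> 'L';
    RETURN @@ROWCOUNT;  -- 1 = this caller locked the row, 0 = another process already had it
END

The caller then checks the return value exactly as with the original procedure.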
Upvotes: 3