Reputation: 905
Code (notice the order of the values is different, so it alternates between locking rows):
static void Main( string[] args )
{
    List<int> list = new List<int>();
    for( int i = 0; i < 1000; i++ )
        list.Add( i );

    Parallel.ForEach( list, i =>
    {
        using( NamePressDataContext db = new NamePressDataContext() )
        {
            // Touches rows 7276 and 240 in this order...
            db.ExecuteCommand( @"update EBayDescriptionsCategories set CategoryId = Ids.CategoryId from EBayDescriptionsCategories
                join (values (7276, 20870),(240, 20870)) as Ids(Id,CategoryId) on Ids.Id = EBayDescriptionsCategories.Id" );
            // ...then the same rows in the reverse order.
            db.ExecuteCommand( @"update EBayDescriptionsCategories set CategoryId = Ids.CategoryId from EBayDescriptionsCategories
                join (values (240, 20870),(7276, 20870)) as Ids(Id,CategoryId) on Ids.Id = EBayDescriptionsCategories.Id" );
        }
    } );
}
Table def:
CREATE TABLE [dbo].[EBayDescriptionsCategories](
    [CategoryId] [int] NOT NULL,
    [Id] [int] NOT NULL,
    CONSTRAINT [PK_EBayDescriptionsCategories] PRIMARY KEY CLUSTERED
    (
        [Id] ASC
    )
)
Exception:
Transaction (Process ID 80) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
The code works only with the WITH (TABLOCK) hint. Is it possible not to lock the whole table just to update those 2 rows in parallel?
Upvotes: 2
Views: 620
Reputation: 29796
I agree with the other answers regarding the locking.
The more pressing question is: what are you hoping to gain from this? There is only one cable those commands travel down.
You are probably making overall performance worse by parallelizing the updates. It is far better to do your computation in parallel but to serialize (and possibly batch) your updates, as in the sketch below.
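A minimal sketch of that pattern, reusing the question's NamePressDataContext (ComputeCategory is a hypothetical stand-in for whatever the parallel work actually is):

// Do the expensive computation in parallel...
var updates = list.AsParallel()
    .Select( i => new { Id = i, CategoryId = ComputeCategory( i ) } )
    .ToList();

// ...but apply the updates serially on a single connection,
// so no two writers ever contend for the same row locks.
using( NamePressDataContext db = new NamePressDataContext() )
{
    foreach( var u in updates )
        db.ExecuteCommand(
            "update EBayDescriptionsCategories set CategoryId = {0} where Id = {1}",
            u.CategoryId, u.Id );
}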
Upvotes: 0
Reputation: 171226
Your two statements acquire row locks in different orders. That's a classic cause of deadlocks. You can fix this by ensuring that locks are always taken in some global order (for example, ordered by Id). You should probably coalesce the two UPDATE statements into one and sort the list of IDs on the client before sending it to SQL Server. For many typical UPDATE plans this actually works fine (it is not guaranteed, though).
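For example, a sketch of the coalesced statement, with the VALUES rows from the question pre-sorted by Id on the client before the command text is built:

using( NamePressDataContext db = new NamePressDataContext() )
{
    // One statement, rows ordered by Id (240 before 7276), so every
    // caller tends to request the row locks in the same global order.
    db.ExecuteCommand( @"update EBayDescriptionsCategories set CategoryId = Ids.CategoryId from EBayDescriptionsCategories
        join (values (240, 20870),(7276, 20870)) as Ids(Id,CategoryId) on Ids.Id = EBayDescriptionsCategories.Id" );
}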
Or, you add retry logic in case you detect a deadlock (SqlException.Number == 1205). This is more elegant because it requires no deeper code changes. But deadlocks have performance implications, so only do this for low deadlock rates.
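A sketch of such a retry loop (the retry count and backoff are arbitrary; sql is a placeholder for one of the update statements above, and it needs using System.Data.SqlClient and System.Threading):

const int maxRetries = 3;
for( int attempt = 0; ; attempt++ )
{
    try
    {
        using( NamePressDataContext db = new NamePressDataContext() )
            db.ExecuteCommand( sql );
        break; // success
    }
    catch( SqlException ex )
    {
        if( ex.Number != 1205 || attempt >= maxRetries )
            throw;
        // Chosen as deadlock victim: back off briefly and rerun the statement.
        Thread.Sleep( 100 * ( attempt + 1 ) );
    }
}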
If your parallel processing generates lots of updates, you could INSERT all those updates into a temp table (which can be done concurrently) and, when you are done, execute one big UPDATE that copies the individual update records to the main table. You just change the join source in your sample queries.
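A sketch of that shape (#Updates is an illustrative name; note that a session-local #temp table is visible to one connection only, so truly concurrent writers would target a permanent staging table instead):

using( NamePressDataContext db = new NamePressDataContext() )
{
    // Open the connection explicitly so the #temp table survives across commands.
    db.Connection.Open();
    db.ExecuteCommand( "create table #Updates( Id int primary key, CategoryId int )" );

    // The parallel phase appends its results here instead of touching the main table.
    db.ExecuteCommand( "insert into #Updates( Id, CategoryId ) values ({0}, {1})", 7276, 20870 );
    db.ExecuteCommand( "insert into #Updates( Id, CategoryId ) values ({0}, {1})", 240, 20870 );

    // One big UPDATE at the end: same query as before, with the temp table as the join source.
    db.ExecuteCommand( @"update EBayDescriptionsCategories set CategoryId = u.CategoryId from EBayDescriptionsCategories
        join #Updates as u on u.Id = EBayDescriptionsCategories.Id" );
}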
Upvotes: 1
Reputation: 294407
Code, notice the order of the values is different. So it alternates between locking rows
No, it doesn't alternate. It acquires the locks in two different orders. A deadlock is guaranteed.
Is it possible not to ... update just those 2 rows in parallel?
Not like that, it isn't. What you're asking for is the definition of a deadlock; something's gotta give. The solution must come from your business logic: there should be no attempts to process the same set of IDs from distinct transactions. What that means is entirely business specific. If you cannot achieve that, then you are basically begging for deadlocks. There are some things you can do, but none is bulletproof and all come at great cost. The problem is higher up the chain.
Upvotes: 0