Alan Araya

Reputation: 721

SQL Server locks on concurrent table

I have the following scenario:

We wrapped a lot of commands in transactions, and the result was a lot of deadlock situations.

I'm looking for tips on how to avoid those locks. In fact we don't need the transaction as such; we just need to guarantee that if a command fails for any reason, the whole operation gets rolled back. I don't know if there's a way to do that without using transactions.
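To illustrate what I mean by "rolled back", this is roughly the pattern we use today (the table and column names here are made up, not our real schema):

   DECLARE @OrderId INT = 1;   -- hypothetical parameter

   BEGIN TRY
       BEGIN TRANSACTION;

       -- either both statements succeed or neither does
       UPDATE dbo.Orders SET Status = 'Processed' WHERE OrderId = @OrderId;
       INSERT INTO dbo.OrderLog (OrderId, LoggedAt) VALUES (@OrderId, GETDATE());

       COMMIT TRANSACTION;
   END TRY
   BEGIN CATCH
       IF @@TRANCOUNT > 0
           ROLLBACK TRANSACTION;

       -- re-raise the error to the caller (THROW is not available on 2008 R2)
       DECLARE @msg NVARCHAR(2048) = ERROR_MESSAGE();
       RAISERROR(@msg, 16, 1);
   END CATCH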

PS: We're using SQL Server 2008 R2.

PS2: I discovered that the big problem was some system tables I used in the FROM clause of the update. Those tables are used by the whole system and get tons of inserts/updates/selects. So I was locking things I shouldn't have, because this program doesn't change any data in those tables.

EX:

   UPDATE t1
   SET x = 1
   FROM t1
   INNER JOIN systable1 AS t ON ...
   INNER JOIN systable2 AS t2 ON ...
   WHERE ...

I guess this was the big problem, so I added the WITH (NOLOCK) hint on t and t2, and WITH (ROWLOCK) on t1.
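With the hints added, the statement looks roughly like this (join conditions still omitted as above, and of course NOLOCK means those reads may see uncommitted data):

   UPDATE t1
   SET x = 1
   FROM t1 WITH (ROWLOCK)
   INNER JOIN systable1 AS t WITH (NOLOCK) ON ...
   INNER JOIN systable2 AS t2 WITH (NOLOCK) ON ...
   WHERE ...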

One other thing I should mention: this is a test environment and we are stressing the database and the program to the max, because we just can't risk failing in production.

Can I use a checkpoint strategy to re-do the action if it fails?
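By "re-do the action" I mean something like a retry loop that re-runs the whole operation when SQL Server picks it as the deadlock victim (error 1205). A rough sketch, where dbo.DoTheWork is a made-up procedure wrapping the transactional part:

   DECLARE @retries INT = 3;

   WHILE @retries > 0
   BEGIN
       BEGIN TRY
           EXEC dbo.DoTheWork;   -- hypothetical proc containing the transaction
           BREAK;                -- success, stop retrying
       END TRY
       BEGIN CATCH
           IF ERROR_NUMBER() = 1205 AND @retries > 1
               SET @retries = @retries - 1;   -- deadlock victim: try again
           ELSE
           BEGIN
               -- not a deadlock (or out of retries): give up and re-raise
               DECLARE @msg NVARCHAR(2048) = ERROR_MESSAGE();
               RAISERROR(@msg, 16, 1);
               BREAK;
           END
       END CATCH
   END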

Thanks.

Upvotes: 0

Views: 1403

Answers (4)

Kris Krause

Reputation: 7326

First, yes, you need transactions to ensure success or failure (rollback). Only 1,000 records? That table must be getting slammed with inserts/updates/deletes! So to me this sounds like a heavy transaction table, so be careful about adding indexes, as they will only make your inserts/updates/deletes slower. And I have to ask: there are no triggers on that heavy transaction table, right?

So what about your reads? Ever think about separating out a reporting table or something? Replication might be overkill. How accurate and up-to-the-minute does the data need to be?

Finally - profile, profile, profile.

Upvotes: 1

Steve Stedman

Reputation: 2672

Start by performance tuning all of the queries inside the transaction. Sometimes speeding up a query inside the transaction by adding an index can make a big difference in the number of deadlocks you see.
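For example, if the update in the question filters the joined system tables on an unindexed column, an index along these lines might shorten how long the locks are held (the column names here are hypothetical, not from the question):

   -- hypothetical: cover the column the UPDATE joins/filters on
   CREATE NONCLUSTERED INDEX IX_systable1_LookupCol
       ON systable1 (LookupCol)
       INCLUDE (OtherCol);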

Also, keep the transactions as small as possible while still getting the rollback you need when something fails. For instance, if you have 5 queries that look up data but only 3 that change data, you might be able to shrink the transaction down to just the 3 queries that change data.
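As a rough sketch of what that might look like (the tables and statements below are invented purely for illustration):

   DECLARE @OrderId INT = 42, @ProductId INT = 7, @CustomerId INT;   -- hypothetical values

   -- the look-up queries stay outside the transaction
   SELECT @CustomerId = CustomerId FROM dbo.Orders WHERE OrderId = @OrderId;

   BEGIN TRANSACTION;
       -- only the statements that actually change data are wrapped
       UPDATE dbo.Orders SET Status = 'Processed' WHERE OrderId = @OrderId;
       UPDATE dbo.Stock  SET Qty = Qty - 1 WHERE ProductId = @ProductId;
       INSERT INTO dbo.OrderLog (OrderId, CustomerId, LoggedAt)
       VALUES (@OrderId, @CustomerId, GETDATE());
   COMMIT TRANSACTION;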

Hope this helps.

Upvotes: 0

Tony Hopkinson

Reputation: 20320

Using READ UNCOMMITTED is a possible solution, though it has knock-on effects; I'd try ROWLOCK first. SQL Server optimises towards page locks to reduce the number of locks it has to hold, and since you only have a thousand records, unless they are very wide a page lock will lock a good few of them.

Upvotes: 0

BlueMonkMN

Reputation: 25601

You may be able to eliminate locking problems by using READ UNCOMMITTED (see Why use a READ UNCOMMITTED isolation level? and http://msdn.microsoft.com/en-us/library/aa259216(v=sql.80).aspx). But be aware that this could result in reading data that is inconsistent or won't necessarily be persisted in the database.
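For reference, the isolation level can be switched per session rather than per table; a minimal sketch (the SELECTs just reuse the placeholder table names from the question):

   SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

   -- these reads no longer take shared locks, but they may return uncommitted ("dirty") rows
   SELECT * FROM systable1;
   SELECT * FROM systable2;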

Upvotes: 0
