Reputation: 721
I have the following scenario:
a table that is accessed (update, delete, insert and select) by multiple programs. In fact they are all the same program, just instantiated by multiple users. This table never grows beyond 1000 rows because the program deletes data after use and inserts new data again. It's like a Supplier/Collector situation.
As this is an industrial production scenario, I must guarantee certain operations: when a user confirms any action, the program updates that table with data coming from other tables in the system.
So we implemented transactions on a lot of commands, and the result was a lot of deadlocks.
I'd like some tips on what we could do to avoid those deadlocks. In fact we don't need the transaction itself; we just need to guarantee that a command will run and that, if it fails for any reason, the whole operation gets rolled back. I don't know if there's a way to do that without using transactions.
PS: We're using SQL Server 2008 R2.
PS2: I discovered that some system tables I used in the FROM clause of the update were the big problem. Those tables are used by the whole system and get tons of inserts/updates/selects. So I was locking things I shouldn't have, because this program doesn't change any data in those tables.
EX:
UPDATE t1
SET x = 1
FROM systable1 AS t
INNER JOIN systable2 AS t2
    ON ...
WHERE ...
I guess this was the big problem, so I added the hint WITH (NOLOCK) on t and t2, and WITH (ROWLOCK) on t1.
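For reference, this is roughly what the statement looks like with the hints applied (the join and filter conditions are placeholders, as above):

UPDATE t1 WITH (ROWLOCK)
SET x = 1
FROM systable1 AS t WITH (NOLOCK)
INNER JOIN systable2 AS t2 WITH (NOLOCK)
    ON ...
WHERE ...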
Another thing I should mention: this is a test environment and we are stressing the database and the program to the max, because we just can't risk a failure in production.
Can I use a checkpoint strategy to redo the action if it fails?
Thanks.
Upvotes: 0
Views: 1403
Reputation: 7326
First, yes, you need transactions to ensure success or failure (rollback). Only 1,000 records? That table must be getting slammed with inserts/updates/deletes! So to me this sounds like a heavy transaction table, so be careful with adding indexes, as they will only make your inserts/updates/deletes slower. And I have to confirm: there are no triggers on your heavy transaction table, right?
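If it helps, here's a minimal sketch of the transaction-plus-rollback pattern on 2008 R2 (table, column and variable names are made up, so adapt them to your schema):

DECLARE @ItemId INT = 1;  -- stand-in for a real parameter

BEGIN TRY
    BEGIN TRANSACTION;

    -- hypothetical statements standing in for your real updates/deletes
    UPDATE dbo.WorkTable SET Quantity = Quantity - 1 WHERE ItemId = @ItemId;
    DELETE FROM dbo.WorkTable WHERE Quantity <= 0;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    -- re-raise so the caller sees the failure (THROW is 2012+, so use RAISERROR on 2008 R2)
    DECLARE @msg NVARCHAR(2048) = ERROR_MESSAGE();
    RAISERROR(@msg, 16, 1);
END CATCH;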
So what about your reads? Ever think about separating out a reporting table or something? Replication might be overkill. How accurate and up-to-the-minute does the data need to be?
Finally - profile, profile, profile.
Upvotes: 1
Reputation: 2672
Start by performance-tuning all of the queries inside the transaction. Sometimes speeding up a query inside the transaction by adding an index can make a big difference in the number of deadlocks you see.
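For example, if the lookups inside the transaction filter or join on a particular column, an index along these lines (hypothetical table/column names) can shorten how long locks are held:

-- index the columns your in-transaction queries filter/join on
CREATE NONCLUSTERED INDEX IX_WorkTable_SupplierId
    ON dbo.WorkTable (SupplierId)
    INCLUDE (Quantity);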
Also, keep the transactions as small as possible while still getting the rollback you need when something fails. For instance, if you have 5 queries that look up data but only 3 that change data, you might be able to shrink the transaction down to just the 3 queries that change data, as sketched below.
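A rough sketch of that idea (all names are made up): do the lookups into variables first, then open the transaction only around the statements that modify data:

DECLARE @OrderId INT = 42;          -- stand-in for a real parameter
DECLARE @Qty INT, @SupplierId INT;

-- read-only lookups stay outside the transaction
SELECT @Qty = Quantity, @SupplierId = SupplierId
FROM dbo.SourceTable
WHERE OrderId = @OrderId;

BEGIN TRANSACTION;

-- only the statements that change data are inside the transaction
UPDATE dbo.WorkTable SET Quantity = @Qty WHERE SupplierId = @SupplierId;
DELETE FROM dbo.WorkTable WHERE Quantity = 0;
INSERT INTO dbo.WorkTable (SupplierId, Quantity) VALUES (@SupplierId, @Qty);

COMMIT TRANSACTION;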
Hope this helps.
Upvotes: 0
Reputation: 20320
Using READ UNCOMMITTED is a possible solution, though it has knock-on effects. I'd try ROWLOCK first, though. SQL Server will often take page locks rather than row locks to reduce the total number of locks it has to manage; since you only have a thousand records, unless they are very wide a single page lock will cover a good few of them.
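If you want to see what granularity you're actually getting during the stress test, a query against sys.dm_tran_locks from another session shows whether the locks being taken are KEY/RID (row) or PAGE level (database name is just an example):

SELECT resource_type, request_mode, request_status, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('YourDatabase')   -- hypothetical database name
GROUP BY resource_type, request_mode, request_status
ORDER BY lock_count DESC;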
Upvotes: 0
Reputation: 25601
You may be able to eliminate locking problems by using READ UNCOMMITTED (see "Why use a READ UNCOMMITTED isolation level?" and http://msdn.microsoft.com/en-us/library/aa259216(v=sql.80).aspx). But be aware that this could result in reading data that is inconsistent or won't necessarily be persisted in the database.
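For completeness, the isolation level is set per session, e.g. (table name is just an example):

-- affects all subsequent reads in this session; roughly equivalent to WITH (NOLOCK) on every table read
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT *
FROM dbo.WorkTable;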
Upvotes: 0