user3564942

Reputation: 21

SQL Server Delete Lock issue

I have a SQL Server database from which I am deleting rows from three tables A, B, and C in batches, with some conditions, through a SQL script scheduled in a SQL job. The job runs for two hours because the tables hold a large amount of data. While the job is running, my front-end application is not accessible (it gives timeout errors), since the application inserts into and updates these same tables A, B, and C.

Is it possible for the front-end application to run in parallel without issues while the SQL script is running? I have checked the locks on the tables, and SQL Server is acquiring page locks. Could Read Committed Snapshot or Snapshot isolation levels, or converting page locks to row locks, help here? Any advice is appreciated.

Upvotes: 2

Views: 7136

Answers (2)

dean

Reputation: 10098

I suspect that SQL Server at some point escalates to a table lock, which means the table becomes inaccessible, both for reading and updating.

To optimize locking and concurrency when dealing with large deletes, use batches. Start with 5000 rows at a time (to prevent lock escalation) and monitor how it behaves and whether it needs further tuning up or down. 5000 is a "magic number": it's low enough that the lock manager doesn't consider escalating to a table lock, yet large enough for good performance.
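A sketch of such a batched loop, assuming the rows to delete in table A are identified by a hypothetical condition on a Status column (adjust the table name and the WHERE clause to your schema):

while 1 = 1
begin
    -- stay under 5000 rows per statement to avoid lock escalation
    delete top (4500)
    from   A
    where  Status = 'Expired';  -- hypothetical condition

    if @@rowcount = 0
        break;
end

Each iteration is its own short transaction, so locks are released between batches and other sessions can get in.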

Whether timeouts will happen or not depends on other factors as well, but this will surely reduce them, if not eliminate them altogether. If the timeouts happen on read operations, you should be able to get rid of them. Another approach, of course, is to increase the command timeout value on the client.

Snapshot (optimistic) isolation is an option as well, READ COMMITTED SNAPSHOT more precisely, but it won't help with blocking between updates from other sessions. Also, beware of version store growth (in tempdb). It's best combined with the proposed batch approach to keep the transactions small.
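Enabling it is a one-time database-level setting (a sketch; YourDb is a placeholder for the actual database name, and WITH ROLLBACK IMMEDIATE kicks out sessions that would otherwise block the change):

alter database YourDb set read_committed_snapshot on with rollback immediate;

After this, readers see the last committed version of a row instead of blocking behind writers.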

Also, switch to bulk-logged recovery for the duration of the delete if the database normally runs in full recovery. But switch back as soon as it finishes, and take a log backup.
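The switch itself is a pair of ALTER DATABASE statements (a sketch; YourDb and the backup path are placeholders, and the log backup afterwards keeps the backup chain intact):

-- before the delete job starts
alter database YourDb set recovery bulk_logged;

-- ... run the batched delete here ...

-- when the job finishes
alter database YourDb set recovery full;
backup log YourDb to disk = N'YourDb_after_delete.trn';  -- placeholder path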

Almost forgot: if it's the Enterprise edition of SQL Server, partition your table; then you can simply switch the partition out. It's almost instantaneous, and the clients will never notice it.
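A sketch of the switch-out, assuming YourTable is partitioned, the rows to remove all live in partition 1, and StagingTable is an empty table with an identical structure on the same filegroup (all names here are hypothetical):

-- metadata-only operation: near-instant regardless of row count
alter table YourTable switch partition 1 to StagingTable;

-- the old rows can now be removed without touching the live table
truncate table StagingTable;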

Upvotes: 0

Andomar

Reputation: 238296

Split the operation into two phases. In the first phase, collect the primary keys of the rows to delete:

create table #TempList (ID int);

insert  #TempList
select  ID
from    YourTable
-- add a WHERE clause here with the conditions that identify the rows to delete

In the second phase, use a loop to delete those rows in small batches:

while 1=1
    begin
    -- small batches keep each transaction short, so locks are released between them
    delete  top (1000)
    from    YourTable
    where   ID in (select ID from #TempList)

    if @@rowcount = 0
        break
    end

The smaller batches will allow your front end applications to continue in between them.
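If timeouts persist, one variation (a common tweak, not part of the answer above) is to pause briefly between batches and index the temp table so the IN lookup stays cheap:

create index IX_TempList_ID on #TempList (ID);

while 1=1
    begin
    delete  top (1000)
    from    YourTable
    where   ID in (select ID from #TempList)

    if @@rowcount = 0
        break

    waitfor delay '00:00:01'  -- give waiting sessions a chance to acquire their locks
    end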

Upvotes: 2
