Reputation: 5221
I have a SQL Server table with 16,130,000 rows and I need to delete around 20% of them. When I do a simple:
delete from items where jobid=12
it takes forever.
I stopped the query after 11 minutes. Selecting data is pretty fast, so why is the delete so slow? Selecting 850,000 rows takes around 30 seconds.
Is it because of table locks? And can anything be done about it? I would expect deleting rows to be faster, since no data has to be transferred over the wire.
Best regards, Thomas
Upvotes: 0
Views: 209
Reputation: 15648
Try a batching technique to improve performance, keep transaction log usage small, and avoid having the log grow and consume database space.
declare
    @batch_size   int,
    @del_rowcount int = 1

set @batch_size = 100   -- rows per batch; tune this (batches in the thousands are common for multi-million-row deletes)
set nocount on;

-- repeat small, separately committed deletes until nothing is left to delete
while @del_rowcount > 0
begin
    begin tran

    -- add a where clause (e.g. where jobid = 12) to restrict the delete to the target rows
    delete top (@batch_size)
    from dbo.LargeDeleteTest

    set @del_rowcount = @@rowcount
    print 'Delete row count: ' + cast(@del_rowcount as nvarchar(32))

    commit tran
end
Dropping any foreign keys, deleting the rows, and then recreating the foreign keys can also speed things up.
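For example, a minimal sketch of that approach, using a hypothetical constraint FK_items_jobs on dbo.items referencing dbo.jobs (your actual constraint, table and column names will differ):

-- drop the foreign key before the large delete
alter table dbo.items drop constraint FK_items_jobs;

-- ... run the (batched) delete here ...

-- recreate it afterwards; "with check" revalidates existing rows so the constraint is trusted again
alter table dbo.items with check
    add constraint FK_items_jobs
    foreign key (jobid) references dbo.jobs (jobid);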
Upvotes: 0
Reputation: 4491
Without knowing what reservation size you are using, it is hard to say whether deleting X records in Y seconds is expected or not. I can, however, explain how the system works so that you can make that determination yourself with a bit more investigation.
The log commit rate is limited by the reservation size you purchase. Deletes are fundamentally limited by the ability to write out log records (and replicate them to multiple machines in case your main machine dies). When you select records, you don't have to go over the network to N machines, and you may not even need to go to the local disk if the records are cached in memory, so selects are generally expected to be faster than inserts/updates/deletes, which must harden their log records before they can complete.
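If you want to see what a long-running delete is actually waiting on, a sketch against the standard DMVs looks like this (the filter on command = 'DELETE' assumes the delete is the only such statement currently running):

-- run from another session while the delete is executing;
-- WRITELOG (or LOG_RATE_GOVERNOR on Azure SQL Database) points at log throughput,
-- a non-zero blocking_session_id points at lock blocking instead
select r.session_id,
       r.command,
       r.wait_type,
       r.wait_time,
       r.total_elapsed_time,
       r.blocking_session_id
from sys.dm_exec_requests as r
where r.command = 'DELETE';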
You can read about the specific limits for different reservation sizes here: DTU Limits and vCore Limits.
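On Azure SQL Database you can also check how close you are to your reservation's log-rate cap with sys.dm_db_resource_stats, which keeps one row per 15-second interval for roughly the last hour; this is only a sketch, but avg_log_write_percent near 100 while the delete runs would mean the governed log rate is the limiting factor:

-- recent resource usage relative to the limits of the current reservation size
select top (20)
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
from sys.dm_db_resource_stats
order by end_time desc;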
One common problem customers hit is doing individual operations in a loop (a cursor, or a loop driven from the client). That means each statement updates a single row and has to harden its log record serially, because the app has to wait for one statement to return before submitting the next. You are not hitting that, since you are running a big delete as a single statement. It could be slow for other reasons, such as:
Upvotes: 1