Reputation: 41847
I want to delete about 90% of the rows from a million+ row table. Is there anything I can do to make this faster, e.g. will switching to the simple recovery model help?
Upvotes: 5
Views: 1610
Reputation: 754963
Add an IsDeleted (BIT) flag, set it to 1 for those 90%, and create a view over your table which only shows the rows with IsDeleted = 0.
Deleting a) is always a resource-intensive operation and b) destroys any possibility of accountability - I would avoid it whenever possible, for both reasons (performance and data destruction). Use the "soft delete" approach instead.
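A minimal sketch of that setup, assuming a hypothetical dbo.Orders table with a Status column used as the delete-condition (all names here are illustrative, not from the question):

-- Add the flag; existing and new rows default to "not deleted"
ALTER TABLE dbo.Orders ADD IsDeleted BIT NOT NULL DEFAULT 0;
GO

-- "Delete" the 90% by flagging them rather than removing them
UPDATE dbo.Orders
SET    IsDeleted = 1
WHERE  Status = 'Expired';       -- hypothetical delete-condition
GO

-- Point queries at the view, which hides the flagged rows
CREATE VIEW dbo.ActiveOrders AS
SELECT OrderID, Status
FROM   dbo.Orders
WHERE  IsDeleted = 0;
GO

The UPDATE still touches 90% of the rows, but nothing is destroyed and the change is reversible.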
Upvotes: 1
Reputation: 65516
Copy the rows you don't want to delete into a temporary table using SELECT INTO, then truncate the table rather than deleting from it, and finally copy the rows back into the old table. Remember to drop constraints first.
If you have identity columns, then after reinserting the data, use something like this to reseed the table:
DECLARE @seed INT
SELECT @seed = MAX(identity_col) FROM tableName
-- RESEED sets the current identity value, so the next insert gets @seed + 1
DBCC CHECKIDENT ('tableName', RESEED, @seed)
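Putting the whole sequence together, here is a minimal sketch assuming a hypothetical dbo.Orders table with an identity column OrderID and a Status column used as the keep-condition (all names are illustrative, not from the question):

-- Hypothetical schema, for illustration only
CREATE TABLE dbo.Orders (
    OrderID INT IDENTITY(1,1) PRIMARY KEY,
    Status  VARCHAR(20) NOT NULL
);
GO

-- 1. Copy the ~10% of rows to keep into a temp table
SELECT OrderID, Status
INTO   #keep
FROM   dbo.Orders
WHERE  Status = 'Active';        -- hypothetical keep-condition

-- 2. Truncate: minimally logged, far faster than a fully logged DELETE.
--    Drop any foreign keys referencing the table first, or this fails.
TRUNCATE TABLE dbo.Orders;

-- 3. Copy the rows back, preserving the original identity values
SET IDENTITY_INSERT dbo.Orders ON;
INSERT INTO dbo.Orders (OrderID, Status)
SELECT OrderID, Status FROM #keep;
SET IDENTITY_INSERT dbo.Orders OFF;

DROP TABLE #keep;
-- ...then reseed as shown above so new inserts continue past MAX(OrderID)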
Upvotes: 8
Reputation: 62127
If you cannot copy the remaining rows to another table, truncate, and copy them back...
...then dropping the indexes before the delete and recreating them afterwards is about the only thing you can do programmatically, and it may help.
...or get a faster disk subsystem, mostly for the data files.
The recovery model will not help: simple recovery only means the log space is reusable immediately after the (possibly implicit) commit, but nothing in it stops the log entries from being written in the first place.
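For the index suggestion above, a hedged sketch (index, table, and column names are hypothetical):

-- Drop nonclustered indexes so the delete does not maintain them row by row
DROP INDEX IX_Orders_Status ON dbo.Orders;

DELETE FROM dbo.Orders
WHERE  Status = 'Expired';       -- hypothetical delete-condition

-- Rebuild once, over the surviving 10% of rows
CREATE NONCLUSTERED INDEX IX_Orders_Status
    ON dbo.Orders (Status);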
Upvotes: 1
Reputation: 300719
Given that you want to delete 90% of the rows, it might be fastest to SELECT INTO a new table just the 10% of rows you want to keep.
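A sketch of that approach with hypothetical names; note that SELECT INTO copies only the data, so indexes, constraints, and permissions must be recreated on the new table:

-- Copy the 10% to keep; SELECT INTO is minimally logged under the
-- simple or bulk-logged recovery model
SELECT *
INTO   dbo.Orders_keep
FROM   dbo.Orders
WHERE  Status = 'Active';        -- hypothetical keep-condition

-- Swap: drop the original and rename the copy into its place
DROP TABLE dbo.Orders;
EXEC sp_rename 'dbo.Orders_keep', 'Orders';

-- Recreate indexes, constraints, and permissions on dbo.Orders here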
Upvotes: 2