Reputation: 127
I'm working on Oracle 9i. I have a table with 135,000,000 records, partitioned so that each partition holds approximately 10,000,000 rows; everything is indexed.
I need to delete around 70,000,000 rows from this table to satisfy a new business requirement.
So I created a backup of the rows to be deleted as a separate table.
Table1 <col1, col2........> -- main table (135,000,000 rows)
Table2 <col1, col2........> -- backup table (70,000,000 rows)
I tried the delete query below:
delete from table1 t1
 where exists (select 1 from table2 t2 where t2.col1 = t1.col1);
but it ran for hours without finishing.
Then I tried the following PL/SQL block:
declare
  cursor c1 is
    select col1 from table2;
  c2  c1%rowtype;
  cnt number := 0;
begin
  open c1;
  loop
    fetch c1 into c2;
    exit when c1%notfound;
    delete from table1 t1 where t1.col1 = c2.col1;
    cnt := cnt + 1;
    -- commit every 100,000 rows and reset the counter
    if cnt >= 100000 then
      commit;
      cnt := 0;
    end if;
  end loop;
  close c1;
  commit;  -- commit the final partial batch
end;
Even so, it has been running for more than 12 hours and still has not completed.
Please note that there are multiple indexes on table1 and an index on col1 on table2. All the tables and indexes are analysed.
Please advise if there is any way to optimize this scenario.
Thanks guys.
Upvotes: 0
Views: 8712
Reputation: 231661
You say that the table is partitioned. Is your intention to drop all the data in certain partitions? If so, you should be able to simply drop the 7 partitions that have the 70 million rows that you want to drop. I'm assuming, however, that your problem isn't that simple.
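For the simple case, a minimal sketch (the partition and index names below are placeholders, not taken from the question) would be:
-- Repeat for each of the 7 partitions whose rows are to be removed.
ALTER TABLE table1 DROP PARTITION p_old_data_1;
-- Any global indexes are left unusable by the drop and need a rebuild afterwards
-- (or add UPDATE GLOBAL INDEXES to the DROP PARTITION statement).
ALTER INDEX table1_global_idx REBUILD;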
If you can do interim commits, which implies that you don't care about transactional consistency, the most efficient approach is likely something along the lines of:
CREATE TABLE rows_to_save
AS SELECT *
FROM table1
WHERE <<criteria to select the 65 million rows you want to keep>>;
TRUNCATE TABLE table1;
INSERT /*+ append */
INTO table1
SELECT *
FROM rows_to_save;
Barring that, rather than creating the backup table, it would be more efficient to simply issue the DELETE statement:
DELETE FROM table1
 WHERE <<criteria to select the 70 million rows you want to delete>>;
You may also benefit from dropping or disabling the indexes and constraints before running the DELETE.
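A rough sketch of that, with placeholder index and constraint names (capture the real DDL before dropping anything):
-- Disable/drop before the big DELETE.
ALTER TABLE table1 DISABLE CONSTRAINT table1_fk;
DROP INDEX table1_col1_idx;
DELETE FROM table1
 WHERE <<criteria to select the 70 million rows you want to delete>>;
COMMIT;
-- Recreate and re-enable afterwards.
CREATE INDEX table1_col1_idx ON table1 (col1);
ALTER TABLE table1 ENABLE CONSTRAINT table1_fk;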
Upvotes: 1
Reputation: 203
I'm going to answer this assuming you want to filter against the backup table, but it would probably be cheaper to just use the negation of the criteria you used to populate the backup table.
1) Create a new table with the same structure: no indexes, constraints, or triggers.
2) Generate one insert statement per partition:
select 'insert /*+ append nologging */ into new_table partition (' || n.partition_name ||
       ') select * from old_table partition (' || o.partition_name ||
       ') minus select * from bak_table partition (' || b.partition_name || ');'
  from all_tab_partitions o,
       all_tab_partitions n,
       all_tab_partitions b
 where o.partition_position = all (n.partition_position, b.partition_position)
   and o.table_name = 'OLD_TABLE' and o.table_owner = 'OWNER'
   and n.table_name = 'NEW_TABLE' and n.table_owner = 'OWNER'
   and b.table_name = 'BAK_TABLE' and b.table_owner = 'OWNER';
-- note: I haven't run this; it may need minor corrections in addition to the obvious substitutions
3) Verify and then run the output of the previous query.
4) Build the indexes, constraints, and triggers if needed.
This avoids massive amounts of redo and undo compared to the delete. The append hint gives direct-path inserts, and nologging further reduces redo (make sure you take a backup afterwards). It also takes advantage of your partitioning to break the work into chunks that can be sorted in fewer passes.
You could probably go faster with parallel insert + parallel select, but it is probably not necessary. Just don't do a parallel select without also parallelizing the insert and issuing an "alter session enable parallel dml".
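For reference, a minimal sketch of that combination, using an illustrative filter instead of the generated per-partition MINUS statements (table names and parallel degree are assumptions):
alter session enable parallel dml;
insert /*+ append parallel(new_table, 4) */ into new_table
select /*+ parallel(old_table, 4) */ *
  from old_table
 where <<criteria to select the rows you want to keep>>;
-- the loaded table cannot be queried in this session until the direct-path/parallel insert is committed
commit;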
Upvotes: 0
Reputation: 3152
I remember facing this issue earlier. In that case, we resorted to the following steps, since it worked out faster than any other delete operation (a sketch is given after the list):
1) Create another table with identical structure
2) Insert into the new table the records you want to keep (use Direct path insert to speed this up)
3) Drop the old table
4) Rename the new table to the original table's name
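A minimal sketch of those steps, assuming a placeholder name table1_new and placeholder keep criteria:
-- 1) New table with the same structure (no indexes, constraints, or triggers yet).
CREATE TABLE table1_new AS SELECT * FROM table1 WHERE 1 = 0;
-- 2) Direct-path load of the rows to keep.
INSERT /*+ append */ INTO table1_new
SELECT * FROM table1
 WHERE <<criteria to select the 65 million rows you want to keep>>;
COMMIT;
-- 3) and 4) Swap the tables, then recreate indexes, constraints, triggers, and grants.
DROP TABLE table1;
ALTER TABLE table1_new RENAME TO table1;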
Upvotes: 4
Reputation: 9319
1) Drop all indexes (back up the CREATE INDEX statements first).
2) Take the SELECT statement that was used to build the backup table and turn it into a DELETE command.
3) Recreate all the indexes (a sketch follows).
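A minimal sketch, assuming placeholder index names and that table2 holds the keys of the rows to delete:
-- Back up the CREATE INDEX DDL, then drop (repeat per index on table1).
DROP INDEX table1_col1_idx;
-- Turn the backup-table SELECT into a DELETE.
DELETE FROM table1
 WHERE <<the same criteria used to populate table2>>;
COMMIT;
-- Recreate the indexes from the saved statements.
CREATE INDEX table1_col1_idx ON table1 (col1);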
Upvotes: 4