Reputation: 79
I have a scenario wherein I need to copy 500 million rows from Table1 to Table2. A couple of points:
So, here is my approach to this task, and I am asking for recommendations to improve it or to prevent any sudden unwanted situations.
Any suggestions? Thanks a lot for your time.
Upvotes: 0
Views: 3154
Reputation: 36798
Use a single INSERT ... SELECT statement instead of PL/SQL.
There's no need to commit in chunks or to have a parallel strategy that mirrors the partitions. If the APPEND hint works and a direct-path write is used, then (with the table in NOLOGGING mode) there won't be any significant REDO or UNDO usage, so you don't need to run in chunks to reduce resource consumption. Oracle can easily divide a segment into granules; it's just copying a bunch of blocks from one place to another, so it doesn't matter whether it processes them per-partition. (Some possible exceptions are if you're using an unusual column type that doesn't support parallel SQL, or if you're joining tables and relying on partition-wise joins.)
alter session enable parallel dml;
alter table table2 nologging;
--Picking a good DOP can be tricky. 32 might not be the best number for you.
insert /*+ append parallel(32) */ into table2
select * from table1;
commit;
alter table table2 logging;
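As an optional sanity check, since the requested DOP can be silently downgraded at run time, the session's parallel query statistics show how many parallel servers the last statement actually used. This is just a sketch; the exact statistic names in V$PQ_SESSTAT vary a little by version.
-- Run in the same session as the INSERT.
-- "Allocation Height" under LAST_QUERY shows the DOP the last parallel statement actually received.
select * from v$pq_sesstat;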
Before you run this, check the execution plan. There are lots of things that can prevent direct-path writes, and you want to find them before you start the DML.
In the execution plan, look for "LOAD AS SELECT" to confirm direct-path writes, "PX" operations to confirm parallelism, and a "PX" operation before the "LOAD AS SELECT" to confirm that the writes, and not just the reads, are done in parallel.
alter session enable parallel dml;
alter table table2 nologging;
explain plan for
insert /*+ append parallel(32) */ into table2
select * from table1;
select * from table(dbms_xplan.display);
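For reference, a good plan looks roughly like the sketch below. This is only an illustration, not real output; the exact operation names, Ids, and table queue names depend on your version. The important part is that "LOAD AS SELECT" sits underneath the "PX" operations.
| Id | Operation              | Name     |
|  0 | INSERT STATEMENT       |          |
|  1 |  PX COORDINATOR        |          |
|  2 |   PX SEND QC (RANDOM)  | :TQ10000 |
|  3 |    LOAD AS SELECT      | TABLE2   |
|  4 |     PX BLOCK ITERATOR  |          |
|  5 |      TABLE ACCESS FULL | TABLE1   |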
I often find it's not worth dealing with indexes separately. But that may depend on the number of indexes.
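If TABLE2 does have several large indexes and you decide they are worth handling separately, one common pattern is to mark them unusable before the load and rebuild them in parallel afterwards. The sketch below uses a hypothetical index name (TABLE2_IDX1); note that an unusable unique index would still block the inserts, so unique constraints would need different handling.
-- Repeat for each large non-unique index on TABLE2 before the INSERT.
-- SKIP_UNUSABLE_INDEXES defaults to TRUE, so the load ignores the unusable index.
alter index table2_idx1 unusable;

-- ... run the INSERT /*+ APPEND PARALLEL */ ... SELECT and COMMIT as above ...

-- Rebuild in parallel after the load, then restore the index defaults.
alter index table2_idx1 rebuild parallel 32 nologging;
alter index table2_idx1 noparallel;
alter index table2_idx1 logging;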
Upvotes: 2