user1140840

Reputation: 79

Oracle Table LOGGING Mode - Bulk data insertion

I have a scenario wherein I need to copy 500 million rows from Table1 to Table2. A couple of points:

  1. Table1 has 2 billion rows.
  2. Table2 is a new table and identical to Table1.
  3. Table1 and Table2 both are of List partition type.
  4. Both tables have to be in the same tablespace, and the tablespace is created in LOGGING mode.
  5. Tablespace block size is 8192; FORCE_LOGGING is NO; AUTOEXTEND is ON; redo archival is enabled.
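For reference, the settings listed above can be confirmed with queries like the following (the tablespace name is a placeholder):

```sql
-- Database-level force logging and archiving:
select force_logging, log_mode from v$database;

-- Per-tablespace block size and logging attributes:
select tablespace_name, block_size, logging, force_logging
  from dba_tablespaces
 where tablespace_name = 'MY_TBS';
```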

So, here is my approach for this activity; I'd welcome recommendations to improve it or to prevent unexpected problems.

  1. Create Table2 with the same structure, without any indexes or a PK.
  2. alter table Table2 nologging; -- Put the table in NOLOGGING mode to stop redo generation. This is done purely to improve performance.
  3. Do this activity in 50 parallel jobs (jobs created based on the partitioning column). The partitioning column has 120 distinct values, so there are 120 jobs in total. The first 50 are submitted, and as soon as one finishes, the 51st is submitted, and so on.
  4. Using a cursor, bulk fetch with a limit of 5000 and FORALL for the insert (with the APPEND hint). Commit immediately after each iteration, so the commit frequency is 5000 rows.
  5. After all the jobs have finished, put Table2 back in LOGGING mode: alter table Table2 logging;
  6. Create all required indexes and the PK on Table2 with parallelism enabled, then alter the indexes to NOPARALLEL.
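Step 4 above could be sketched as the following PL/SQL (table names and the partition-key literal are placeholders; note that for an INSERT ... VALUES inside FORALL, the direct-path hint is APPEND_VALUES in 11gR2+, not APPEND):

```sql
declare
  cursor c is
    select * from table1
     where part_col = 'P1';           -- placeholder: one job per partition key
  type t_rows is table of table1%rowtype;
  l_rows t_rows;
begin
  open c;
  loop
    fetch c bulk collect into l_rows limit 5000;
    exit when l_rows.count = 0;
    forall i in 1 .. l_rows.count
      insert /*+ append_values */ into table2 values l_rows(i);
    commit;                           -- commit every 5000 rows, per step 4
  end loop;
  close c;
end;
/
```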

Any suggestions? Thanks a lot for your time.

Upvotes: 0

Views: 3154

Answers (1)

Jon Heller

Reputation: 36798

Use a single SELECT statement instead of PL/SQL.

There's no need to commit in chunks or to have a parallel strategy that mirrors the partitions. If the APPEND hint works and a direct-path write is used, there won't be any significant REDO or UNDO usage, so you don't need to run in chunks to reduce resource consumption. Oracle can easily divide a segment into granules; since it's just copying a bunch of blocks from one place to another, it doesn't matter whether they are processed per-partition. (Some possible exceptions are if you're using a weird column type that doesn't support parallel SQL, or if you're joining tables and using partition-wise joins.)

alter session enable parallel dml;

alter table table2 nologging;

--Picking a good DOP can be tricky.  32 might not be the best number for you.
insert /*+ append parallel(32) */ into table2
select * from table1;

commit;

alter table table2 logging;

Before you run this, check the execution plan. There are lots of things that can prevent direct-path writes and you want to find them before you start the DML.

In the execution plan, make sure you see "LOAD AS SELECT" to ensure direct-path writes, "PX" to ensure parallelism, and a "PX" operation before the "LOAD AS SELECT" to ensure that both the writes and the reads are done in parallel.

alter session enable parallel dml;

alter table table2 nologging;

explain plan for
insert /*+ append parallel(32) */ into table2
select * from table1;

select * from table(dbms_xplan.display);
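As an illustration only (not actual output, and operation names vary by version), a healthy plan for this statement would look roughly like the sketch below: LOAD AS SELECT confirms direct-path writes, and PX operations both above and below it confirm that writes and reads run in parallel.

```
| Id | Operation               | Name   |
|----|-------------------------|--------|
|  0 | INSERT STATEMENT        |        |
|  1 |  PX COORDINATOR         |        |
|  2 |   PX SEND QC (RANDOM)   |        |
|  3 |    LOAD AS SELECT       | TABLE2 |
|  4 |     PX BLOCK ITERATOR   |        |
|  5 |      TABLE ACCESS FULL  | TABLE1 |
```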

I often find it's not worth dealing with indexes separately. But that may depend on the number of indexes.
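If you do build the indexes afterwards (step 6 of the question), the usual pattern is to create each one in parallel and then reset its attributes; the index and column names here are placeholders:

```sql
create index table2_id_ix on table2 (id) parallel 32 nologging;
alter index table2_id_ix noparallel logging;

-- Optionally back the PK constraint with the prebuilt index:
alter table table2 add constraint table2_pk primary key (id)
  using index table2_id_ix;
```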

Upvotes: 2
