Sherzodbek

Reputation: 202

Insert procedure is working fast at first but slows down after n number of records

We were doing a BULK COLLECT insert into a table in Oracle 19c. The logic is as follows:

    declare
        cursor c is
            select /*+ parallel */ * from source_table;   -- runs in parallel via a hint
        type t_src_table is table of source_table%rowtype;
        type t_tar_table is table of target_table%rowtype;
        v_row          t_src_table;
        v_tar_tab      t_tar_table := t_tar_table();
        v_target_table target_table%rowtype;
    begin
        open c;
        loop
            fetch c bulk collect into v_row limit 1000;
            exit when v_row.count = 0;
            for i in v_row.first .. v_row.last loop
                -- do processing here: assign v_row(i) values to the
                -- target_table%rowtype variable, then extend the nested
                -- table and populate it with this target row
                v_tar_tab.extend;
                v_tar_tab(v_tar_tab.count) := v_target_table;
            end loop;
        end loop;
        close c;
        -- insert with forall
        forall i in v_tar_tab.first .. v_tar_tab.last
            insert into target_table values v_tar_tab(i);
        v_tar_tab := t_tar_table();   -- reset the collection
        commit;
    end;

The problem is that in this particular case the source table has 300 000 rows, and for the first 100 000 rows the insert works very fast, but after that the time for each 1000-row fetch keeps increasing, and the overall time for the remaining 200 000 rows is far larger than the time spent on the first 100 000. To see this, we added a counter variable, increased it by 1000 on each fetch, and logged the iteration number and the counter value to our log table. After the 95th-100th iteration, i.e. once roughly 100 000 rows have been fetched and processed, the process slows down.
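Roughly, the instrumentation inside the loop looked like this (a sketch only; v_iter, v_counter and log_msg stand in for our actual variables and autonomous-transaction logging procedure):

    -- placed just after the bulk collect fetch
    v_iter    := v_iter + 1;         -- iteration number
    v_counter := v_counter + 1000;   -- running count of fetched rows
    log_msg(v_iter, v_counter);      -- write both values to the log table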

There are no commits inside the loop, the target table is set to NOLOGGING, and its constraints and indexes are disabled before the insert procedure runs. I can't think of a reason why it works fast for the first n rows and slow for the rest. Any ideas on what should be changed?
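For reference, the preparation steps were along these lines (the constraint and index names here are placeholders):

    alter table target_table nologging;
    alter table target_table disable constraint target_table_pk;  -- repeated per constraint
    alter index target_table_ix1 unusable;                        -- repeated per index
    alter session set skip_unusable_indexes = true;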

If it is important to note, the SELECT statement in cursor c runs in parallel via a hint. I also added the APPEND_VALUES hint to the INSERT statement, but it didn't change the overall behaviour or timing.
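The hint was placed like this (direct-path insert with FORALL; note that after APPEND_VALUES the table can't be queried in the same session until you commit):

    forall i in v_tar_tab.first .. v_tar_tab.last
        insert /*+ APPEND_VALUES */ into target_table
        values v_tar_tab(i);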

Upvotes: 0

Views: 336

Answers (1)

Connor McDonald

Reputation: 11616

Look at your session statistics related to undo records. The delta between when a query commenced and when you perform a fetch from it can have an impact on performance because we guarantee to return the records as they were at the moment the query commenced.

If the source table is undergoing transaction activity, then we need to undo those changes as part of the fetch process.
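One way to check from your own session, assuming you can query the v$ views, is to sample the undo-related statistics between fetches:

    select sn.name, st.value
    from   v$mystat st
    join   v$statname sn on sn.statistic# = st.statistic#
    where  sn.name in ('data blocks consistent reads - undo records applied',
                       'consistent changes');

If the first statistic keeps climbing as the run progresses, your fetches are doing an increasing amount of consistent-read work.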

Video showing a demo of that cost here

Upvotes: 1
