John Bailey

Building bulk update query

I am currently working on a stored procedure that uses a cursor within a cursor to insert into and update a target table from multiple source tables.

The insert load into the target table was running for about an hour, because it fired a separate insert for every record fetched by the cursor.

I created another sub-procedure to handle this case using a multi-row insert format. Suppose 10 records are fetched by the cursor; the bulk insert statement would look like this:

execute immediate 'insert into target_table (col1, col2, col3, col4, col5) values
('100','101','102','abc','def'),
('200','201','202','abc2','def2'),
...
('1000','1001','1002','1003','def10')'; -- tenth record

The execution time has come down dramatically, to roughly 5 minutes.
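
For reference, a rough PL/SQL sketch of how such a statement could be assembled from a cursor (the table and column names mirror the example above; note that concatenating literal values like this bypasses bind variables and is open to SQL injection, and multi-row VALUES lists are only accepted by some databases, e.g. Oracle from 23ai onwards):

declare
  -- accumulate the whole multi-row insert as one string
  l_sql clob := 'insert into target_table (col1, col2, col3, col4, col5) values ';
  l_sep varchar2(1) := '';  -- no comma before the first row
begin
  for rec in (select col1, col2, col3, col4, col5 from source_table1) loop
    l_sql := l_sql || l_sep
          || '(''' || rec.col1 || ''',''' || rec.col2 || ''','''
          || rec.col3 || ''',''' || rec.col4 || ''',''' || rec.col5 || ''')';
    l_sep := ',';
  end loop;
  execute immediate l_sql;  -- one statement instead of one insert per row
end;
/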

Similarly, say I have 10 records fetched from the cursor and I want to update ten records in the target table. If I write an update statement within the cursor loop, it fires a separate update for each record.

How can I handle the update operation as a single bulk update (similar to the insert approach above) rather than individual updates?

Example:

CUR_C1 CURSOR FOR
  select col1, col2, col3, col4, col5 from source_table1;

OPEN CUR_C1
FOR CUR_REC in CUR_C1
DO
  v1 := col1;
  v2 := col2;
  v3 := col3;
  v4 := col4;
  v5 := col5;

  UPDATE target_table
  set
    col2 = :v2,
    col3 = :v3,
    col4 = :v4,
    col5 = :v5
  where
    col1 = :v1;

END CUR_REC;

CLOSE CUR_C1;

Answers (1)

Littlefoot

Row-by-row processing is, as you found out, slow. If you have nested loops, things can only get worse, not better. The "bulk processing" you mentioned would be, if possible, to skip loops and cursors entirely and do everything in a single update (or, possibly better, a single merge statement).
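
For illustration, using the column names from your own example, a single correlated update might look like this (a sketch only; it assumes source_table1.col1 matches target_table.col1 one row to one row):

update target_table t
   set (col2, col3, col4, col5) =
       (select s.col2, s.col3, s.col4, s.col5
          from source_table1 s
         where s.col1 = t.col1)
 where exists
       (select null
          from source_table1 s
         where s.col1 = t.col1);  -- only touch rows that have a source match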

The code you posted (the anonymous PL/SQL block) is invalid; there is no do keyword in PL/SQL. By the way, are you really using bind variables (their names are preceded by a colon, :)?

Therefore: if there is no additional calculation or data processing within the loop, try to compose a single query that represents the source data. Join the appropriate columns from that query to the target table's columns, and merge the result.

This is just an example of how it might look:

merge into target_table t
  using
  -- that "large" single query begins here ...
  (select u.id, u.name, a.address, a.phone, c.country_name
   from users u join addresses a on u.user_id = a.user_id
                join countries c on c.country_id = a.country_id
  ) x
  -- ... and ends here
  on (t.id = x.id)
  when matched then update set
    t.name         = x.name,
    t.address      = x.address,
    t.phone        = x.phone,
    t.country_name = x.country_name;

merge can also contain a when not matched clause, which lets you insert rows from the x subquery that are missing from target_table.
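
Extended with that clause, the sketch above might end like this:

merge into target_table t
  using (select ...) x  -- same subquery as above
  on (t.id = x.id)
  when matched then update set
    t.name         = x.name,
    t.address      = x.address,
    t.phone        = x.phone,
    t.country_name = x.country_name
  when not matched then insert
    (id, name, address, phone, country_name)
  values
    (x.id, x.name, x.address, x.phone, x.country_name);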

Such set-oriented processing should be far faster than your loop-in-a-loop, row-by-row approach.
