oradbanj

Reputation: 563

Oracle ORA-04030 even when using Bind Variables in a LOOP

I have to delete almost 500 million rows from a remote table using PL/SQL. Since the UNDO tablespace cannot handle the volume, the deletes are done in batches of 1,000,000 rows, each followed by a commit. Also, to reduce hard parsing, I am using bind variables with the following syntax:

str := 'delete from table@link where id >= :x and id < :y';
execute immediate str using start_id, start_id + 1000000;

After every invocation, start_id is incremented by 1,000,000 until SQL%ROWCOUNT returns 0 (zero) and the known end_id is reached.
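
In full, the loop looks roughly like this (a sketch only; it uses the test1@dst objects from the test case added below, and the exact exit condition is an assumption):

declare
    start_id  number;
    end_id    number;   -- known in advance
    l_str     varchar2(200);
begin
    select min(id), max(id) into start_id, end_id from test1@dst;
    l_str := 'delete from test1@dst where id >= :x and id < :y';
    while start_id < end_id
    loop
        execute immediate l_str using start_id, start_id + 1000000;
        commit;
        start_id := start_id + 1000000;   -- move to the next batch
        -- note: with sparse ids this loops over many empty ranges
    end loop;
end;
/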

But the process fails with ORA-04030 as follows:

ORA-04030: out of process memory when trying to allocate 16408 bytes (QERHJ hash-joi,QERHJ Bit vector)

ORA-04030: out of process memory when trying to allocate 41888 bytes (kxs-heap-c,temporary memory)

Note that I am already using bind variables, so there is no hard parsing after the first execution.

One contributing factor could be the range of IDs at the target. Assuming the first few rows are in increasing order, the IDs are:

100,000,000,000
200,000,000,000
50,000,000,000,000,000
50,000,000,000,011,111

On the second iteration, IDs from 200,000,000,000 to 200,001,000,000 will be deleted.

But since there are no IDs in this range, it will take almost 50,000,000,000 iterations to reach the next row (50,000,000,000,000,000 / 1,000,000 = 50,000,000,000).

Of course, I can always examine the IDs at the target and choose a better range (much larger than the default 1 million).

But that alone should not make the process run out of memory. (A quick check of batch sparsity is sketched below.)
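
For instance, one can count how many fixed-size batches actually contain rows (a sketch against the test1@dst test table defined below):

select count(distinct floor(id / 1000000))                     as non_empty_batches,
       floor(max(id) / 1000000) - floor(min(id) / 1000000) + 1 as batches_to_scan
from   test1@dst;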

Added Code:

remote.sql : execute on the remote database:

create table test1
(
    id      number(38) primary key
); 
insert into test1 select level from dual connect by level < 1000000;
insert into test1 values ( 1000000000000 );
insert into test1 values ( 2000000000000 );
commit;
exec dbms_stats.gather_table_stats ( ownname => user, tabname => 'test1', -
     cascade => true, estimate_percent => 100 );
commit; 

local.sql :

create or replace procedure batch_del
as
    l_min_val   integer;
    l_max_val   integer;
    l_cnt       integer;
    l_cnt_dst   integer;
    l_begin     integer;
    l_end       integer;
    l_str       varchar2(1000);
    l_tot_cnt   integer;
    pragma autonomous_transaction;
begin
    l_tot_cnt := 0;
    l_str := 'select min(id), max(id), count(*) from test1@dst';
    execute immediate l_str into l_min_val, l_max_val, l_cnt_dst;
    dbms_output.put_line ( 'min: ' || l_min_val || ' max: ' || l_max_val
         || ' total : ' || l_cnt_dst );
    l_begin := l_min_val;
    while l_begin < l_max_val
    loop
        begin
            l_end := l_begin + 100000;
            delete from test1@dst where id >= l_begin and id < l_end;
            l_cnt := SQL%ROWCOUNT;
            dbms_output.put_line ( 'Rows Processed : ' || l_cnt );
            l_tot_cnt := l_tot_cnt + l_cnt;
            dbms_output.put_line ( 'Rows Processed So Far : ' || l_tot_cnt );
            commit;
        exception
            when others then
                dbms_output.put_line ( 'Error : ' || sqlcode );
        end;
        l_begin := l_begin + 100000;
    end loop;
    dbms_output.put_line ( 'Total : ' || l_tot_cnt );
end;
/

All Local Implementation:

drop table test1;
create table test1
(
 id    number(38) primary key
);
insert into test1 select level from dual connect by level < 1000000;
insert into test1 values ( 1000000000000 );
insert into test1 values ( 2000000000000 );
commit;
exec dbms_stats.gather_table_stats ( ownname => user, tabname => 'test1', -
     cascade => true, estimate_percent => 100 );
commit;
create or replace procedure batch_del
as
    l_min_val   integer;
    l_max_val   integer;
    l_cnt       integer;
    l_begin     integer;
    l_tot_cnt   integer;
    pragma autonomous_transaction;
begin
    l_tot_cnt := 0;
    select min(id), max(id) into l_min_val, l_max_val from test1;
    l_begin := l_min_val;
    while l_begin < l_max_val
    loop
        begin
            delete from test1 where id >= l_begin and id < l_begin + 10000;
            l_cnt := SQL%ROWCOUNT;
            dbms_output.put_line ( 'Rows Processed : ' || l_cnt );
            l_tot_cnt := l_tot_cnt + l_cnt;
            dbms_output.put_line ( 'Rows Processed So Far : ' || l_tot_cnt );
            commit;
        exception
            when others then
                dbms_output.put_line ( 'Error : ' || sqlcode );
        end;
        l_begin := l_begin + 10000;
    end loop;
    dbms_output.put_line ( 'Total : ' || l_tot_cnt );
end;
/

set timing on;
set serveroutput on size unlimited;
exec batch_del;

Upvotes: 2

Views: 829

Answers (3)

oradbanj

Reputation: 563

To make @Boneist's answer more flexible, one can use EXECUTE IMMEDIATE as follows:

loop
.....
  str := 'select min(id) min_id, max(id) max_id
          from   (select id,
                         ceil((row_number() over (order by id)) / :batch_size) grp
                  from   test1 t1)
          group  by grp
          order  by grp';

  execute immediate str bulk collect into l_min_id_array, l_max_id_array
    using l_rows_to_process;
  ....
end loop;
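
For context, here is a self-contained sketch of how that could be wired up, reusing the collection types from Boneist's answer below (test1 comes from the test scripts above):

declare
    type type_id_array is table of number index by pls_integer;
    l_min_id_array    type_id_array;
    l_max_id_array    type_id_array;
    l_rows_to_process number := 10000;
    str               varchar2(1000);
begin
    str := 'select min(id) min_id, max(id) max_id
            from   (select id,
                           ceil((row_number() over (order by id)) / :batch_size) grp
                    from   test1 t1)
            group  by grp
            order  by grp';

    -- the batch size is bound, so changing it does not create
    -- a new SQL statement (no extra hard parse)
    execute immediate str bulk collect into l_min_id_array, l_max_id_array
        using l_rows_to_process;

    dbms_output.put_line ( 'ranges built: ' || l_min_id_array.count );
end;
/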

Upvotes: 0

krokodilko

Reputation: 36127

You are using DBMS_Output in your procedure:

dbms_output.put_line ( 'Rows Processed : ' || l_cnt );
....
dbms_output.put_line ( 'Rows Processed So Far : ' || l_tot_cnt );

Each of the above calls produces a string roughly 25 characters long (~25 bytes).

The PUT_LINE procedure does not print each message to the console as the code runs; instead, it places all messages into a memory buffer. Please see this note in the documentation: DBMS_OUTPUT

Note:
Messages sent using DBMS_OUTPUT are not actually sent until the sending subprogram or trigger completes. There is no mechanism to flush output during the execution of a procedure.
....
....
Rules and Limits
The maximum line size is 32767 bytes.

The default buffer size is 20000 bytes. The minimum size is 2000 bytes and the maximum is unlimited.
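
For reference, the buffer can also be enlarged from PL/SQL; a sketch (passing NULL means an unlimited buffer in 10g and later):

exec dbms_output.enable(buffer_size => NULL);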

You wrote in the question:

it will take almost 50,000,000,000 iterations

It is very easy to estimate the memory required to store the DBMS_Output messages: just 2 messages, each ~25 bytes, over 50,000,000,000 iterations:

2 * 25 * 50,000,000,000 = 2,500,000,000,000 bytes

So you would need about 2,500 gigabytes (~2.5 terabytes) of memory to store all the messages from your procedure. A PGA_AGGREGATE_TARGET of 1.5 GB is definitely too low.


Just remove DBMS_Output from your code; no human being is able to read 50-100 billion messages from the console anyway.
If you want to monitor the procedure, use the DBMS_APPLICATION_INFO.SET_CLIENT_INFO procedure instead; it stores a message of up to 64 characters, which you can then retrieve by querying the V$SESSION view.
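
A minimal sketch of that approach (the message text and loop variable are illustrative, and querying V$SESSION assumes the necessary privileges):

-- inside the delete loop, instead of dbms_output.put_line:
dbms_application_info.set_client_info ( 'deleted so far: ' || l_tot_cnt );

-- from another session, to check progress:
select sid, client_info
from   v$session
where  client_info like 'deleted so far:%';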

Upvotes: 3

Boneist

Reputation: 23588

This is not an answer, but it's too large to fit in a comment, so here we are!

There's no need for the dynamic sql. If I were you, I'd rewrite this as:

create or replace procedure batch_del as
  l_min_val         integer;
  l_max_val         integer;
  l_begin           integer;
  l_end             integer;
  l_cnt_dst         integer;
  l_rows_to_process number := 100000;
  pragma autonomous_transaction;
begin
  select min(id),
         max(id),
         count(*)
  into   l_min_val,
         l_max_val,
         l_cnt_dst
  from   test1@dst;

  l_begin := l_min_val;

  while l_begin < l_max_val
  loop
    begin
      l_end := l_begin + l_rows_to_process;

      delete from test1@dst
      where  id >= l_begin
      and    id < l_end;

      dbms_output.put_line('rows deleted: '||sql%rowcount);

      commit;
    exception
      when others then
        dbms_output.put_line('error : ' || sqlcode);
    end;

    l_begin := l_begin + l_rows_to_process;

  end loop;
end;
/

Alternatively, if you've got non-consecutive IDs, perhaps this would be more performant for you:

create or replace procedure batch_del as
  type type_id_array is table of number index by pls_integer;
  l_min_id_array     type_id_array;
  l_max_id_array     type_id_array;
  l_rows_to_process  number := 10000;
  pragma autonomous_transaction;
begin

  select min(id) min_id,
         max(id) max_id bulk collect
  into   l_min_id_array,
         l_max_id_array
  from   (select --/*+ driving_site(t1) */
                 id,
                 ceil((row_number() over(order by id)) / l_rows_to_process) grp
          from   test1 t1)
  group  by grp
  order  by grp;

  for i in l_min_id_array.first..l_min_id_array.last
  loop
    begin
      delete from test1
      where  id between l_min_id_array(i) and l_max_id_array(i);

      dbms_output.put_line('rows deleted in loop '||i||': '||sql%rowcount);

      commit;
    exception
      when others then
        -- i hope there is some better way of logging an error in your
        -- production db; e.g. a separate procedure writing to a log table.
        dbms_output.put_line('error in loop '||i||': ' || sqlcode);
    end;
  end loop;
end batch_del;
/

Upvotes: 2
