Siick

Reputation: 511

What is the best way to commit every 200 clients in my Oracle delete job?

I have a query in my database that runs every day. It deletes rows for up to 750 clients. We want to keep that query, but we need to commit every 200 clients.

Here is the original query:

delete from MYSCHEMA.TABLE1 where id_cli in (select ID_CLI from MYSCHEMA.TMP_ID_CLI_SUPPR);
delete from MYSCHEMA.TABLE2 where id_cli in (select ID_CLI from MYSCHEMA.TMP_ID_CLI_SUPPR);

As you can see, the list of client IDs to delete is in another table. We get it with this select:

select ID_CLI from MYSCHEMA.TMP_ID_CLI_SUPPR

That list can contain up to 750 client IDs. So, to commit every 200 clients, I did it this way:

declare
  i number := 0;
begin
  for r in (select ID_CLI from MYSCHEMA.TMP_ID_CLI_SUPPR)
  loop
    delete from MYSCHEMA.TABLE1 where id_cli = r.id_cli;
    delete from MYSCHEMA.TABLE2 where id_cli = r.id_cli;
    i := i + 1;
    if mod(i, 200) = 0 then
      commit;
    end if;
  end loop;
  commit;
end;

But my coworker tells me it's a bad idea, because if we have 750 IDs to delete, these two statements will each run 750 times, so 750 * 2 = 1500 statements! He tells me to use ROWNUM instead: delete the first 200 IDs, commit, then the next 200, and so on. So I tried it, and it looks like this:

declare
  listTotal number := 0;
begin
  select count(1) into listTotal from MYSCHEMA.TMP_ID_CLI_SUPPR;
  delete from MYSCHEMA.TABLE1 where id_cli in (select ID_CLI from MYSCHEMA.TMP_ID_CLI_SUPPR where rownum < 201);
  delete from MYSCHEMA.TABLE2 where id_cli in (select ID_CLI from MYSCHEMA.TMP_ID_CLI_SUPPR where rownum < 201);
  commit;
  if listTotal > 200 then
    delete from MYSCHEMA.TABLE1 where id_cli in (select ID_CLI from (select tmp.*, rownum r from MYSCHEMA.TMP_ID_CLI_SUPPR tmp) where r > 200 and r < 401);
    delete from MYSCHEMA.TABLE2 where id_cli in (select ID_CLI from (select tmp.*, rownum r from MYSCHEMA.TMP_ID_CLI_SUPPR tmp) where r > 200 and r < 401);
  end if;
  commit;
  if listTotal > 400 then
    delete from MYSCHEMA.TABLE1 where id_cli in (select ID_CLI from (select tmp.*, rownum r from MYSCHEMA.TMP_ID_CLI_SUPPR tmp) where r > 400 and r < 601);
    delete from MYSCHEMA.TABLE2 where id_cli in (select ID_CLI from (select tmp.*, rownum r from MYSCHEMA.TMP_ID_CLI_SUPPR tmp) where r > 400 and r < 601);
  end if;
  commit;
  if listTotal > 600 then
    delete from MYSCHEMA.TABLE1 where id_cli in (select ID_CLI from (select tmp.*, rownum r from MYSCHEMA.TMP_ID_CLI_SUPPR tmp) where r > 600 and r < 751);
    delete from MYSCHEMA.TABLE2 where id_cli in (select ID_CLI from (select tmp.*, rownum r from MYSCHEMA.TMP_ID_CLI_SUPPR tmp) where r > 600 and r < 751);
  end if;
  commit;
end;
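
For reference, the repeated IF blocks can be collapsed into one loop that walks the list in windows of 200. This is only a sketch using the same table and column names as above; it assumes TMP_ID_CLI_SUPPR is not modified between batches, so the ROWNUM windows stay consistent:

    declare
      listTotal number;
      lowBound  number := 0;
    begin
      select count(*) into listTotal from MYSCHEMA.TMP_ID_CLI_SUPPR;
      while lowBound < listTotal loop
        -- delete the next window of up to 200 client ids
        delete from MYSCHEMA.TABLE1 where id_cli in
          (select ID_CLI from (select tmp.ID_CLI, rownum r from MYSCHEMA.TMP_ID_CLI_SUPPR tmp)
            where r > lowBound and r <= lowBound + 200);
        delete from MYSCHEMA.TABLE2 where id_cli in
          (select ID_CLI from (select tmp.ID_CLI, rownum r from MYSCHEMA.TMP_ID_CLI_SUPPR tmp)
            where r > lowBound and r <= lowBound + 200);
        commit;
        lowBound := lowBound + 200;
      end loop;
    end;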

So I'm asking myself: what's the best way to achieve this? I find the second way a bit too complicated, but maybe it's faster? Or maybe you have a better way to do it?

Upvotes: 1

Views: 101

Answers (1)

Ditto

Reputation: 3344

You said you want to commit every 200 "because sometimes it fails before the end".

Based on that information, I'd recommend using the error-logging clause (LOG ERRORS) and keeping it a single statement. I don't believe breaking it up to commit every 200 is the best way to handle this situation.

Here's what I would recommend:

1) Create an error log table for each of your two tables:

  EXECUTE DBMS_ERRLOG.CREATE_ERROR_LOG('TABLE1', 'TABLE1_ERRLOG');
  EXECUTE DBMS_ERRLOG.CREATE_ERROR_LOG('TABLE2', 'TABLE2_ERRLOG');

(or one for each table you are dealing with)

This is a one-time setup; it doesn't have to be redone each day.

2) Have your daily job run the following delete statements, including the LOG ERRORS clause:

  delete from MYSCHEMA.TABLE1 where id_cli in (select ID_CLI from MYSCHEMA.TMP_ID_CLI_SUPPR) 
     LOG ERRORS INTO TABLE1_ERRLOG ('Daily Delete1') REJECT LIMIT 750 ;  

You can put any number you want for REJECT LIMIT; I put 750 for now, since you mentioned you handle up to 750 per day. This allows the statement to ATTEMPT to delete everything and report everything that failed. If you pick a lower number, it will stop after hitting that many failed deletes. Adjust to suit your requirements. ;)

  delete from MYSCHEMA.TABLE2 where id_cli in (select ID_CLI from MYSCHEMA.TMP_ID_CLI_SUPPR)
     LOG ERRORS INTO TABLE2_ERRLOG ('Daily Delete2') REJECT LIMIT 750 ;  

3) Have the job look at those two error log tables after running, and respond accordingly if records exist.

You can inspect them and handle the failures however you like, then re-run the DELETE whenever you want to retry.

(Note: deleting a non-existent record is just fine; it wouldn't log an error for that case.)
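
A quick way to inspect the failures, e.g. for TABLE1 (a sketch: the ORA_ERR_* columns are created automatically by DBMS_ERRLOG, and it assumes your base table has an ID_CLI column, which is copied into the log as VARCHAR2):

    -- Rows rejected by the daily delete, with Oracle's error code and message
    select ORA_ERR_NUMBER$, ORA_ERR_MESG$, ORA_ERR_TAG$, ID_CLI
      from TABLE1_ERRLOG
     where ORA_ERR_TAG$ = 'Daily Delete1';

    -- Optionally clear the log once the failures are handled
    -- delete from TABLE1_ERRLOG where ORA_ERR_TAG$ = 'Daily Delete1';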

Note that this error-logging clause exists for INSERT, UPDATE, and DELETE statements, and you only need one error table per base table, regardless of which of INSERT/UPDATE/DELETE you run.

In other words, after creating TABLE1_ERRLOG, you can run:

     delete from MYSCHEMA.TABLE1 where id_cli in (select ID_CLI from MYSCHEMA.TMP_ID_CLI_SUPPR) 
     LOG ERRORS INTO TABLE1_ERRLOG ('Daily Delete1') REJECT LIMIT 750 ;  

and

     INSERT into MYSCHEMA.TABLE1 ( select * from ... whatever ..)
        LOG ERRORS INTO TABLE1_ERRLOG ('Daily Insert1') REJECT LIMIT 750 ;  

and also:

     UPDATE MYSCHEMA.TABLE1 set some_col = some_value
        where <some condition>
        LOG ERRORS INTO TABLE1_ERRLOG ('Daily Update1') REJECT LIMIT 750 ;  

And they would all dump their errors into the same error log table, TABLE1_ERRLOG, and you can look at the ORA_ERR_TAG$ column to see whether a row came from the delete, insert, or update (it holds the "comment" you pass in; I used "Daily Delete1", "Daily Insert1", and "Daily Update1" in my examples above).

Oracle 10: https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_8005.htm

Example of the LOG ERRORS clause:

https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_9014.htm#BCEGDJDJ

Oracle 11: https://docs.oracle.com/cloud/latest/db112/SQLRF/statements_8005.htm#SQLRF01505

Example of the LOG ERRORS clause:

https://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9014.htm#i2121671

Upvotes: 4
