Reputation: 67
I have two cursors: one fetches records from a table with 50 columns and 10,000+ rows, and the other checks whether a particular column value exists in another big table (2 million rows). For a given year I should write all the records from cursor 1 to a file. If the value exists in cursor 2, I should print an error message saying it exists and not delete the row. If it does not exist, I should delete the row and write it to the same file with a "record deleted" message. I used a nested cursor, but the performance is very bad because every row from cursor 1 is processed against cursor 2, every time.
CURSOR cursor1 IS
  SELECT a.*, a.rowid rid
    FROM table1 a
   WHERE a.year = p_year;

CURSOR check_c2(lv_cd table1.cd%TYPE) IS
  SELECT DISTINCT 'Y'
    FROM table2
   WHERE table2.r_flag = 'R'      -- column name garbled in the original
     AND table2.year = p_year
     AND table2.code = lv_cd;
BEGIN
  FOR r IN cursor1 LOOP           -- a cursor FOR loop exits by itself; no EXIT WHEN needed
    OPEN check_c2(r.cd);
    FETCH check_c2 INTO lv_check;
    IF check_c2%NOTFOUND THEN
      lv_check := 'N';
    END IF;
    CLOSE check_c2;

    IF lv_check = 'Y' THEN
      lv_msg := r.col1 || ',' || r.col2 || ... || r.col50 || ', R code exists - do not delete';
      utl_file.put_line(lv_log_file, lv_msg, autoflush => TRUE);
    ELSE
      DELETE FROM table1 WHERE rowid = r.rid;
      lv_msg := r.col1 || ',' || r.col2 || ... || r.col50 || ', record deleted';
      utl_file.put_line(lv_log_file, lv_msg, autoflush => TRUE);
    END IF;
  END LOOP;
END;
Upvotes: 0
Views: 521
Reputation: 553
I don't have enough reputation to write comments, so I will write this as an answer.
Have you tried adding some time marks to understand which parts spend the most time?
Does table2 have an index on year and code? What is the explain plan of the cursor2 query? If it does, how many rows are there on average per year+code combination? If the amount of data selected overall from table2 is huge, then it can probably be faster to do a single query with a full scan (or an index range scan by year) on table2, grouping, and a hash left outer join from table1 to table2, like:
select a.*, a.rowid, nvl2(c.code, 'Y', 'N') check_col
from table1 a,
(
select distinct code
from table2 b
where b.year = p_year
) c
where a.year = p_year
and c.code(+) = a.cd
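For completeness, a minimal sketch of how that query could drive the whole loop instead of the nested cursor. The check_col flag, the cd/rid columns, and the lv_log_file handle are carried over from the question; the logged message text is my assumption:

-- Sketch only: one pass over the joined result replaces the nested cursor.
FOR r IN (select a.*, a.rowid rid, nvl2(c.code, 'Y', 'N') check_col
            from table1 a,
                 (select distinct code
                    from table2 b
                   where b.year = p_year) c
           where a.year = p_year
             and c.code(+) = a.cd)
LOOP
  IF r.check_col = 'Y' THEN
    -- code exists in table2: log it, keep the row
    utl_file.put_line(lv_log_file, r.cd || ' - R code exists, do not delete');
  ELSE
    -- no match in table2: delete and log
    DELETE FROM table1 WHERE rowid = r.rid;
    utl_file.put_line(lv_log_file, r.cd || ' - record deleted');
  END IF;
END LOOP;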
Upvotes: 1
Reputation: 1552
utl_file.put_line in the loop will be an overhead. Try appending to lv_msg until the string reaches 32,767 bytes and write just once; see the sketch below. This will definitely reduce the I/O, and performance should be improved.
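A rough sketch of that buffering idea, assuming cursor1 and lv_log_file from the question; the lv_buffer variable, the sample lv_msg, and the CHR(10) separators are my own assumptions:

-- Sketch: collect lines in a VARCHAR2 buffer and flush with one put_line
-- call before the next message would push it past the 32,767-byte limit.
-- Assumes the file was FOPENed with max_linesize => 32767.
DECLARE
  lv_buffer VARCHAR2(32767);
  lv_msg    VARCHAR2(32767);
BEGIN
  FOR r IN cursor1 LOOP
    lv_msg := r.col1 || ',' || r.cd;  -- build the line as in the original loop
    IF NVL(LENGTHB(lv_buffer), 0) + LENGTHB(lv_msg) + 1 > 32767 THEN
      utl_file.put_line(lv_log_file, lv_buffer);  -- single physical write
      lv_buffer := NULL;
    END IF;
    IF lv_buffer IS NOT NULL THEN
      lv_buffer := lv_buffer || CHR(10);          -- embedded line breaks
    END IF;
    lv_buffer := lv_buffer || lv_msg;
  END LOOP;
  IF lv_buffer IS NOT NULL THEN
    utl_file.put_line(lv_log_file, lv_buffer);    -- flush the remainder
  END IF;
END;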
Upvotes: 0
Reputation: 142713
How about this? A three-step operation:
Step 1: "save" rows you'll later delete
create table log_table as
select *
from table1 a
where exists (select null
from table2 b
where b.year = a.year
and b.code = a.code
);
Step 2: delete rows:
delete from table1 a
where exists (select null
from table2 b
where b.year = a.year
and b.code = a.code
);
Step 3: if you must, store the rows saved in LOG_TABLE into that file of yours. If not, just leave them in LOG_TABLE.
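If the file is required, a minimal sketch of step 3, assuming a directory object named LOG_DIR and a file name deleted_rows.log (both hypothetical), with UTL_FILE as in the question:

-- Sketch: spool LOG_TABLE to a file; pick whichever columns the file needs.
DECLARE
  lv_file UTL_FILE.FILE_TYPE;
BEGIN
  lv_file := UTL_FILE.FOPEN('LOG_DIR', 'deleted_rows.log', 'w', 32767);
  FOR r IN (SELECT * FROM log_table) LOOP
    -- concatenate the columns as in the original loop
    UTL_FILE.PUT_LINE(lv_file, r.code || ' - record deleted');
  END LOOP;
  UTL_FILE.FCLOSE(lv_file);
END;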
Upvotes: 0