Reputation: 81
I have to update a table which has around 93 million records; I have to update columns which are NULL. At the beginning the DB updated 10k records per 5 seconds; now, after around 60 million updated records, updating the next 10k records takes 30-60 seconds, and I don't know why.
I use a loop with a commit every 10k records:
LOOP
  UPDATE TABLE
  SET    DATE_COLUMN = v_hist_date
  WHERE  DATE_COLUMN IS NULL
  AND    ROWNUM <= c_commit_limit
  AND    NOT_REMOVED IS NULL;

  EXIT WHEN SQL%ROWCOUNT = 0;

  COMMIT;
END LOOP;
Do you have any ideas why it slows down so much, and how it might be possible to speed up this update?
Upvotes: 2
Views: 394
Reputation: 146239
Updates are queries too. You haven't posted an explain plan, but given that you are filtering on columns which are null, it seems probable that your statement is executing a Full Table Scan. That certainly fits the behaviour you describe.
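For reference, the plan could be checked along these lines (the table and column names here are stand-ins for the obfuscated ones in the question, and SYSDATE stands in for v_hist_date):

-- capture the plan for the batched update without executing it
EXPLAIN PLAN FOR
UPDATE my_table
SET    date_column = SYSDATE
WHERE  date_column IS NULL
AND    ROWNUM <= 10000
AND    not_removed IS NULL;

-- display the captured plan; look for a TABLE ACCESS FULL step
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);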
What happens is this: on the first loop the FTS finds 10,000 rows that fit the WHERE criteria almost immediately. Then you commit and the loop starts again. This time the FTS reads the same blocks again, including the ones it updated in the previous iteration, before it finds the next 10,000 rows it can update. And so on. Each loop takes longer because the full table scan has to read more of the table on every iteration.
This is one of the penalties of randomly committing inside a loop. It may be too late for you now, but a better approach would be to track an indexed column such as the primary key, as sketched below. Using such a tracking key allows an index scan to skip past the rows you have already visited.
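A minimal sketch of that approach, assuming a numeric primary key called ID (the table, column, and key names are assumptions, not taken from the question):

DECLARE
  c_commit_limit CONSTANT PLS_INTEGER := 10000;
  v_hist_date    DATE     := DATE '2015-01-01';  -- placeholder for the real history date
  v_last_id      NUMBER   := 0;                  -- highest key value processed so far
  v_max_id       NUMBER;
BEGIN
  -- one pass over the primary key index to find the upper bound
  SELECT MAX(id) INTO v_max_id FROM my_table;

  WHILE v_last_id < v_max_id LOOP
    -- each iteration touches only one slice of the key range,
    -- so rows processed in earlier iterations are never rescanned
    UPDATE my_table
    SET    date_column = v_hist_date
    WHERE  date_column IS NULL
    AND    not_removed IS NULL
    AND    id >  v_last_id
    AND    id <= v_last_id + c_commit_limit;

    v_last_id := v_last_id + c_commit_limit;
    COMMIT;
  END LOOP;
END;
/

Note that each slice covers 10,000 key values rather than exactly 10,000 updated rows, and the slice size can be tuned if the key has large gaps; the point is that each pass starts where the previous one stopped instead of rescanning the whole table from the beginning.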
Upvotes: 5