Victor

Reputation: 17107

How to run an Oracle proc without a transaction log?

In Oracle, when we have a procedure that deletes a lot of data, I guess it writes to the 'transaction logs'. When we commit, the log is cleared.

But if there is a huge number of delete operations, the log will become full. Is there a way to run the proc without the logging?

Upvotes: 0

Views: 1713

Answers (2)

steve

Reputation: 6020

The transaction log is called the redo log in Oracle. If changes were not written to the redo log, you wouldn't be able to recover. Certain operations, such as data loads or index builds, can be marked as unrecoverable.
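As a rough sketch of what "marked as unrecoverable" looks like in practice (all object names here are hypothetical placeholders), a NOLOGGING index build or direct-path load generates minimal redo:

```sql
-- Hypothetical objects; NOLOGGING operations are not recoverable
-- from the redo stream, so take a backup afterwards.

-- An index build that skips most redo generation:
CREATE INDEX orders_ix ON orders (order_date) NOLOGGING;

-- A direct-path load into a NOLOGGING table also generates minimal redo:
ALTER TABLE orders_stage NOLOGGING;
INSERT /*+ APPEND */ INTO orders_stage SELECT * FROM orders_src;
COMMIT;
```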

The statement that the log will become full is wrong. The redo log uses a ring of files: once the last one is filled, Oracle starts writing to the first one again. If archiving is enabled, each file is copied to an archive destination before being overwritten. What can happen is that a redo log switch has to wait for archiving to complete (or the archive destination runs out of space). So the right solution is to size the redo logs correctly, including the number of redo log groups.
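To illustrate the sizing point (file paths, group numbers, and sizes below are assumptions, not from the question), you can inspect the current redo log configuration and add a larger group:

```sql
-- Check current redo log groups and sizes:
SELECT group#, bytes / 1024 / 1024 AS size_mb, members, status
  FROM v$log;

-- Hypothetical example: add a larger redo log group
-- (requires appropriate privileges; adjust path and size to your system):
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u01/oradata/ORCL/redo04.log') SIZE 1G;
```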

The suggestion in the other answer to use TRUNCATE will not work if several tables are involved.

Upvotes: 1

Justin Cave

Reputation: 231761

What is the problem you are trying to solve? I'm guessing your concern is performance, but you say that the problem is that "the log will become full", which may indicate that you're encountering some error in the process.

What fraction of the data in the table is being deleted?

If you are deleting all the data from the table, you could do a TRUNCATE instead.

TRUNCATE TABLE <<table name>>

is a DDL statement that will remove all the data from a table. Since it is DDL and cannot be rolled back, very little REDO and UNDO is generated (Oracle has separate facilities for REDO and UNDO while SQL Server puts both into the same transaction log).

If you are deleting most of the data from the table, it would probably be more efficient to

INSERT /*+ APPEND */ INTO staging_table( column_list )
  SELECT column_list
    FROM table_name
   WHERE <<criteria for the data to save>>;

TRUNCATE TABLE table_name;

INSERT /*+ APPEND */ INTO table_name( column_list )
  SELECT column_list
    FROM staging_table;

It may be even more efficient to drop the old table and rename the staging table to the old table name but that may involve recreating a number of constraints and indexes.
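The drop-and-rename variant can be sketched as follows (table names are the same placeholders as above; this is an outline, not a drop-in script):

```sql
-- After loading the rows to keep into staging_table:
DROP TABLE table_name;
ALTER TABLE staging_table RENAME TO table_name;

-- Any indexes, constraints, triggers, and grants that existed on the
-- old table must be recreated on the renamed table by hand.
```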

If you are regularly doing something like archiving older data, another possibility would be to partition the table by some date column and periodically drop the oldest partition. This is also very efficient, but it does require the Partitioning option, which is an extra-cost option on top of the Enterprise Edition license.
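A minimal sketch of that pattern, assuming the (extra-cost) Partitioning option and a hypothetical `events` table partitioned by range on a date column:

```sql
CREATE TABLE events (
  event_id   NUMBER,
  event_date DATE
)
PARTITION BY RANGE (event_date) (
  PARTITION p2022 VALUES LESS THAN (DATE '2023-01-01'),
  PARTITION p2023 VALUES LESS THAN (DATE '2024-01-01')
);

-- Dropping the oldest partition removes its rows almost instantly,
-- generating minimal redo compared to a row-by-row DELETE:
ALTER TABLE events DROP PARTITION p2022;
```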

Upvotes: 4
