Trinadh Sidda

Reputation: 69

limit to insert rows into a table (Oracle)

In Oracle PL/SQL, I join a few tables and insert the result into another table. The result could run to thousands, lakhs (hundreds of thousands), or even millions of rows. I can insert them as:

insert into tableA
select * from tableB;

Is there any chance of failure because of the number of rows? Or is there a better way to insert the values when there are that many records?

Thanks in Advance

Upvotes: 0

Views: 9077

Answers (3)

nitish mishra

Reputation: 1

To insert a limited number of rows you can use ROWNUM, which is a pseudocolumn. Note that without an ORDER BY, which rows are picked is arbitrary.

For example, to populate a table with 10,000 rows from another table that has 50,000 rows, you can use:

insert into new_table_name select * from old_table_name where ROWNUM <= 10000;

Upvotes: 0

Guido Leenders

Reputation: 4262

My recommended steps are different because performance typically increases when you load more data in one SQL statement using SQL or PL/SQL:

  1. Check the size of your rollback segment (RBS) and possibly bring online a larger dedicated one for this transaction.
  2. For inserts, rollback consumption is roughly proportional to the amount of data inserted. You know the typical row width from the database statistics (see user_tables after analyze table tableB compute statistics for table for all columns for all indexes).
  3. From that, determine how many rows you can insert per iteration.
  4. Insert that amount of data in one big insert, then commit.
  5. Repeat until done.
  6. Locking is normally not an issue with inserts, since rows that do not yet exist can't be locked :-)
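
The loop in steps 3-5 can be sketched in PL/SQL roughly as follows. This is only an illustration: the batch size would come from the rollback-segment calculation in steps 1-3, and it assumes a unique key column (here called "id", a hypothetical name) so each iteration can skip rows already copied:

```sql
-- Sketch: copy tableB into tableA in bounded batches, committing per batch
-- so rollback usage stays within the segment sized in steps 1-3.
DECLARE
  batch_size CONSTANT PLS_INTEGER := 100000;  -- derived from steps 1-3
  rows_done  PLS_INTEGER;
BEGIN
  LOOP
    INSERT INTO tableA
    SELECT *
    FROM   tableB b
    WHERE  NOT EXISTS (SELECT 1 FROM tableA a WHERE a.id = b.id)  -- "id" assumed
    AND    ROWNUM <= batch_size;

    rows_done := SQL%ROWCOUNT;
    COMMIT;                       -- frees rollback before the next batch
    EXIT WHEN rows_done = 0;      -- nothing left to copy
  END LOOP;
END;
/
```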

When running on partitioned tables, you might want to consider scenarios that let the (sub)partitions share the work. When loading from text files with SQL*Loader, you might use a different approach as well, such as a direct path load, which writes preformatted data blocks straight into the database, bypassing the SQL engine, instead of letting the RDBMS process individual SQL statements.

Upvotes: 1

DWilches

Reputation: 23016

Well, everything inside the machine is finite, so if that select returns too many rows it will eventually fail (although the limit is very large; it depends on your storage and memory size, your OS, and possibly other factors).

If you think your query can surpass that limit, then do the insertion in batches and commit after each batch. Of course, you must then handle the case where, halfway through the inserts, you decide to cancel the process, as a rollback will not help you there: the earlier batches are already committed.
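
One common way to batch the work is BULK COLLECT with a LIMIT clause plus FORALL. A minimal sketch, assuming tableA and tableB have identical column structures (the 10,000 batch size is an arbitrary example):

```sql
-- Sketch: fetch rows from tableB in chunks and bulk-insert each chunk
-- into tableA, committing per chunk to keep undo/rollback usage bounded.
DECLARE
  CURSOR c IS SELECT * FROM tableB;
  TYPE t_rows IS TABLE OF tableB%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rows LIMIT 10000;
    EXIT WHEN l_rows.COUNT = 0;

    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO tableA VALUES l_rows(i);

    COMMIT;  -- per-batch commit; a mid-run failure leaves earlier batches in place
  END LOOP;
  CLOSE c;
END;
/
```

Keep in mind the caveat above: if you abort halfway, the committed batches stay in tableA and you must clean them up yourself.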

Upvotes: 2
