Reputation: 397
I'm working with a C# .NET 4.0 application that uses ODP.NET 11.2.0.2.0 with an Oracle 11g database. The application pre-loads a few look-up tables with data, and since most have fewer than 20 records, the scripts run quickly. However, one of the scripts has 802 records and takes 248.671 seconds to insert them, which seems excessive for so little data, given a database that advertises fast operations on large volumes of data.
So I'm wondering: is there a faster way to insert data via script than the way the script is currently written?
The table being inserted into is defined like so:
CREATE TABLE FileIds
(
Id NUMERIC(38) NOT NULL
,Name NVARCHAR2(50) DEFAULT 'Unknown' NOT NULL
,FileTypeGroupId NUMERIC(38) NOT NULL
,CONSTRAINT FK_FileIds_FileTypeGroups FOREIGN KEY ( FileTypeGroupId ) REFERENCES FileTypeGroups ( Id )
)
And the script to insert looks like the following:
BEGIN
INSERT ALL
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1152,5)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1197,10)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1200,6)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1143,3)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1189,9)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1109,7)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1166,4)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (0,8)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1149,2)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1400,1)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1330,11)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1000,0)
-- 790 Records removed for example purposes.
SELECT * FROM DUAL;
COMMIT;
END;
The FileTypeGroups table, referenced in the foreign key, is pre-loaded before the FileIds table. There are no sequences or triggers associated with the FileIds table, and no indexes have been created on it yet.
Upvotes: 4
Views: 5638
Reputation: 36807
Problem
Parsing time may increase exponentially with certain types of statements, especially INSERT ALL. For example:
--Clear any cached statements, so we can consistently reproduce the problem.
alter system flush shared_pool;
alter session set sql_trace = true;
--100 rows
INSERT ALL
INTO FileIds(Id,FileTypeGroupId) VALUES(1, 1)
-- ... repeat 100 times ...
select * from dual;
--500 rows
INSERT ALL
INTO FileIds(Id,FileTypeGroupId) VALUES(1, 1)
-- ... repeat 500 times ...
select * from dual;
alter session set sql_trace = false;
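Run the resulting trace file through tkprof to turn it into a readable report. A minimal invocation from the operating system shell, with hypothetical file names (the actual trace file name depends on the instance and session):

# Format the raw trace file into a readable report (file names are placeholders).
tkprof ORCL_ora_12345.trc parse_times.txt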
In the tkprof report, the parse time increases dramatically with the number of rows. For example:
100 rows:
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.06 0.05 0 1 0 0
Execute 1 0.00 0.00 0 100 303 100
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.06 0.05 0 101 303 100
500 rows:
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 14.72 14.55 0 0 0 0
Execute 1 0.01 0.02 0 502 1518 500
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 14.74 14.58 0 502 1518 500
Solutions
Use the insert into ... select ... from dual union all ... method instead. It usually runs much faster, although its parsing performance may also degrade significantly with size.
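For example, the script from the question could be rewritten in that style. A minimal sketch using a few of the rows shown above (the full script would list all 802 rows the same way):

INSERT INTO FileIds ( Id, FileTypeGroupId )
SELECT 1152, 5  FROM DUAL UNION ALL
SELECT 1197, 10 FROM DUAL UNION ALL
SELECT 1200, 6  FROM DUAL UNION ALL
-- 798 more rows elided, following the same pattern.
SELECT 1000, 0  FROM DUAL;
COMMIT;

All 802 rows still load in a single statement, so the one-round-trip behavior of the original script is preserved.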
Warning
Don't learn the wrong lesson from this. If you're worried about SQL performance, 99% of the time you're better off grouping similar things together instead of splitting them apart. You're doing things the right way; you just ran into a weird bug. (I searched My Oracle Support but couldn't find an official bug for this.)
Upvotes: 5