Reputation: 173
I have a table (over 1,000,000 rows and 60 GB of data) with a BLOB column. I want to move most of the rows (not all of them) to another table. I tried an INSERT INTO x SELECT ... FROM y
statement, but it is too slow.
What is the fastest way?
I am on Oracle 10g or 11g.
Upvotes: 1
Views: 17592
Reputation: 309
It may be too late to suggest something, but it may help alongside the solutions above: if your new (target) table has constraints, indexes, or triggers, try removing or disabling them first, then load your bulk data, and finally re-create or re-enable the constraints, indexes, and triggers and gather statistics on the table. While inserting rows, the DBMS validates the constraints and checks and maintains the indexes, which reduces load speed, so this time-saving approach is only worthwhile when you have to replicate your bulk data once.
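The disable/load/re-enable cycle described above might look like the following sketch. All object names are illustrative, and whether you can disable a given constraint depends on your schema:

```sql
-- Illustrative names; adjust to your schema.
alter table target_table disable constraint target_table_fk;
alter index target_table_idx unusable;
alter session set skip_unusable_indexes = true;

insert /*+ append */ into target_table
select * from source_table;
commit;

alter table target_table enable constraint target_table_fk;
alter index target_table_idx rebuild;

-- Refresh optimizer statistics after the bulk load:
exec dbms_stats.gather_table_stats(user, 'TARGET_TABLE');
```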
Upvotes: 1
Reputation: 127
Use the /*+ append */ hint to get a direct-path insert. When the table is in NOLOGGING mode (or the database runs in NOARCHIVELOG mode), a direct-path insert generates minimal redo, so Oracle writes far less to the archive logs and the load runs much faster:
insert /*+ append */ into TABLE1
select *
from TABLE2;
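Building on the append hint, here is a minimal sketch (table names and the date predicate are illustrative) that also sets the target table to NOLOGGING so the direct-path insert generates minimal redo. Note that after a direct-path insert the session must commit before it can query or modify the table again:

```sql
-- Illustrative names and predicate; adjust to your schema.
alter table target_table nologging;

insert /*+ append */ into target_table
select *
from   source_table
where  created_date < date '2012-01-01';  -- move "most" rows by some filter

commit;  -- required after a direct-path insert before touching target_table again

alter table target_table logging;  -- restore logging; consider a backup afterwards
```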
Upvotes: 5
Reputation: 653
OK, so we don't know your system, so it's hard to tell you much; the answer really depends on your environment. Regardless, here are some tests to show the time and resources it takes to use your method versus another method.
Let's call your method "method 1" and the other method "method 2".
20:43:24 SQL> set autotrace on;
20:43:30 SQL> alter session set SQL_TRACE=TRUE;
Session altered.
20:43:39 SQL> --let's make sure we are reading from disk (for arguments sake)
20:43:45 SQL> alter system flush shared_pool;
System altered.
20:43:45 SQL> alter system flush buffer_cache;
System altered.
20:43:45 SQL>
20:43:45 SQL> --clear my examples
20:43:45 SQL> drop table t;
drop table t
*
ERROR at line 1:
ORA-00942: table or view does not exist
20:43:49 SQL> drop table u;
drop table u
*
ERROR at line 1:
ORA-00942: table or view does not exist
20:43:49 SQL>
20:43:49 SQL> --create table u - we will populate this with random numbers
20:43:49 SQL> create table u (y varchar2(4000));
Table created.
20:43:50 SQL>
20:43:50 SQL> --insert 1 million rows of random numbers
20:43:50 SQL> insert into u
20:43:50 2 (select dbms_random.normal
20:43:50 3 from dual
20:43:50 4 CONNECT BY level <= 1000000);
1000000 rows created.
Execution Plan
----------------------------------------------------------
Plan hash value: 1236776825
------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | 2 (0)| 00:00:01 |
| 1 | LOAD TABLE CONVENTIONAL | U | | | |
|* 2 | CONNECT BY WITHOUT FILTERING| | | | |
| 3 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(LEVEL<=1000000)
Statistics
----------------------------------------------------------
4175 recursive calls
58051 db block gets
13118 consistent gets
47 physical reads
54277624 redo size
675 bytes sent via SQL*Net to client
647 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
56 sorts (memory)
0 sorts (disk)
1000000 rows processed
20:44:21 SQL>
20:44:21 SQL> --create table t - we will populate this from table u
20:44:21 SQL> create table t (x varchar2(4000));
Table created.
20:44:21 SQL>
20:44:21 SQL> --let's make sure we are reading from disk (for arguments sake)
20:44:21 SQL> alter system flush shared_pool;
System altered.
20:44:21 SQL> alter system flush buffer_cache;
System altered.
20:44:26 SQL>
20:44:26 SQL> --insert data from u to t (this is how you said you did this)
20:44:26 SQL> insert into t (select * from u);
1000000 rows created.
Execution Plan
----------------------------------------------------------
Plan hash value: 537870620
---------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 997K| 1905M| 1750 (1)| 00:00:21 |
| 1 | LOAD TABLE CONVENTIONAL | T | | | | |
| 2 | TABLE ACCESS FULL | U | 997K| 1905M| 1750 (1)| 00:00:21 |
---------------------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement (level=2)
Statistics
----------------------------------------------------------
5853 recursive calls
58201 db block gets
24213 consistent gets
6551 physical reads
54591764 redo size
681 bytes sent via SQL*Net to client
599 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
57 sorts (memory)
0 sorts (disk)
1000000 rows processed
20:44:41 SQL>
20:44:41 SQL>
20:44:41 SQL> --now let's start over with a different method
20:44:41 SQL> drop table t;
Table dropped.
20:44:48 SQL> drop table u;
Table dropped.
20:44:50 SQL>
20:44:50 SQL> --create table u - we will populate this with random numbers
20:44:50 SQL> create table u (y varchar2(4000));
Table created.
20:44:51 SQL>
20:44:51 SQL> --insert 1 million rows of random numbers
20:44:51 SQL> insert into u
20:44:51 2 (select dbms_random.normal
20:44:51 3 from dual
20:44:51 4 CONNECT BY level <= 1000000);
1000000 rows created.
Execution Plan
----------------------------------------------------------
Plan hash value: 1236776825
------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | 2 (0)| 00:00:01 |
| 1 | LOAD TABLE CONVENTIONAL | U | | | |
|* 2 | CONNECT BY WITHOUT FILTERING| | | | |
| 3 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(LEVEL<=1000000)
Statistics
----------------------------------------------------------
2908 recursive calls
58153 db block gets
12831 consistent gets
10 physical reads
54284104 redo size
683 bytes sent via SQL*Net to client
647 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
31 sorts (memory)
0 sorts (disk)
1000000 rows processed
20:45:20 SQL>
20:45:20 SQL> --let's make sure we are reading from disk (for arguments sake)
20:45:20 SQL> alter system flush shared_pool;
System altered.
20:45:20 SQL> alter system flush buffer_cache;
System altered.
20:45:25 SQL>
20:45:25 SQL> --create table t using table u
20:45:25 SQL> create table t as (select * from u);
Table created.
20:45:36 SQL>
20:45:36 SQL> drop table t;
Table dropped.
20:45:41 SQL> drop table u;
Table dropped.
20:45:41 SQL>
20:45:41 SQL> commit;
Commit complete.
20:45:41 SQL> spool off
OK, so we care about the two methods we tested, i.e.
insert into t (select * from u);
for which autotrace gives us an answer, and
create table t as (select * from u);
for which it does not.
Fortunately I also ran with SQL_TRACE enabled and generated a TKPROF report of the statistics.
This is what I get for "insert into t (select * from u);":
********************************************************************************
SQL ID: bjdnhkhq8r6h4
Plan Hash: 537870620
insert into t (select * from u)
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.03 2 2 0 0
Execute 1 1.74 7.67 6201 22538 58121 1000000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 1.74 7.71 6203 22540 58121 1000000
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 91
Rows Row Source Operation
------- ---------------------------------------------------
0 LOAD TABLE CONVENTIONAL (cr=0 pr=0 pw=0 time=0 us)
1 TABLE ACCESS FULL U (cr=4 pr=5 pw=0 time=0 us cost=1750 size=1997891896 card=997948)
********************************************************************************
and for "create table t as (select * from u)" we get:
********************************************************************************
SQL ID: asawpwvdj1nbv
Plan Hash: 2321469388
create table t as (select * from u)
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.03 2 2 1 0
Execute 1 0.90 2.68 6372 12823 8573 1000000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.90 2.71 6374 12825 8574 1000000
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 91
Rows Row Source Operation
------- ---------------------------------------------------
0 LOAD AS SELECT (cr=13400 pr=6382 pw=6370 time=0 us)
1000000 TABLE ACCESS FULL U (cr=12640 pr=6370 pw=0 time=349545 us cost=1750 size=2159012856 card=1078428)
********************************************************************************
So what does this tell us? Well:
- Method 2 took about 65% less elapsed time than method 1 (a whole 5 seconds less for 1 million rows)
- Method 2 took about 48% less CPU time than method 1
- Method 2 read slightly more from disk than method 1
- Method 2 retrieved far fewer buffers than method 1
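Since the question is about moving most (not all) rows, the CTAS approach from the test above can take a WHERE clause, followed by a delete of the moved rows. A sketch with illustrative names and predicate:

```sql
-- Illustrative names and predicate; adjust to your schema.
-- CTAS performs a direct-path load (LOAD AS SELECT), as the TKPROF output shows.
create table archive_t nologging as
select *
from   t
where  created_date < date '2012-01-01';

-- Then remove the moved rows from the original table:
delete from t
where  created_date < date '2012-01-01';

commit;
```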
hope this helps you :)
Upvotes: 0