R.Kaka

Reputation: 167

The impacts of increasing a column's size in Oracle

I have a script:

SQL script: alter table xx_vms.es_lot_updates modify (routing_list varchar2(1000));

The previous size of this column was 255; now I am increasing it to 1000. In this table, the routing_list column is usually NULL; only occasionally does it have a value.

Does increasing the size of this column cause performance issues? I'm very confused! Please help. Thanks.

Upvotes: 1

Views: 6320

Answers (3)

abhi

Reputation: 621

No, it doesn't have much to do with performance; the varchar2 maximum size is 4000 bytes.

Once you alter the table, remember to recompile all its dependencies (for example, procedures and packages), as they will be invalidated.
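For instance, here is one way to find and recompile the invalidated objects (a sketch only; the procedure and package names are hypothetical, and the schema name is taken from the question):

-- List objects invalidated by the ALTER (schema name from the question).
select object_name, object_type
  from all_objects
 where owner = 'XX_VMS'
   and status = 'INVALID';

-- Recompile them one by one (hypothetical object names) ...
alter procedure xx_vms.some_proc compile;
alter package xx_vms.some_pkg compile body;

-- ... or recompile only the invalid objects in the schema in one call.
exec dbms_utility.compile_schema('XX_VMS', compile_all => false);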

Upvotes: 0

Kirill Leontev

Reputation: 10931

It depends. Tables whose varchar2(1000) columns actually hold longer values use a greater number of database blocks for storage, so full table scans are probably going to be more expensive, as you're going to read more blocks. Here's a simple test:

17:48:25 HR@sandbox> create table c1000 (txt varchar2(1000));

Table created.

Elapsed: 00:00:00.21
17:48:37 HR@sandbox> create table c255 (txt varchar2(255));

Table created.

Elapsed: 00:00:00.03

-- for this test we assume that only 10% of rows are filled with data
17:50:21 HR@sandbox> ed
Wrote file S:\spool\sandbox\BUFFER_HR_32.sql

  1   insert into c255
  2   select decode(mod(rownum,10), 0, lpad('x', 255,'x')) from dual
  3*  connect by rownum <= 1e5
17:50:24 HR@sandbox> /

100000 rows created.

Elapsed: 00:00:01.26
17:50:47 HR@sandbox> ed
Wrote file S:\spool\sandbox\BUFFER_HR_32.sql

  1   insert into c1000
  2   select decode(mod(rownum,10), 0, lpad('x', 1000,'x')) from dual
  3*  connect by rownum <= 1e5
17:50:51 HR@sandbox> /

100000 rows created.

Elapsed: 00:00:01.45
17:50:52 HR@sandbox> commit;

Commit complete.

Elapsed: 00:00:00.01
17:51:20 HR@sandbox> select table_name, blocks from user_tables where table_name in ('C255', 'C1000');

TABLE_NAME                         BLOCKS
------------------------------ ----------
C1000                                1756
C255                                  622

Elapsed: 00:00:00.20
17:53:28 HR@sandbox> set autotrace traceonly statistics
17:53:37 HR@sandbox> select * from c1000;

100000 rows selected.

Elapsed: 00:00:01.30

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       1901  consistent gets
       1694  physical reads
          0  redo size
   10586458  bytes sent via SQL*Net to client
       2552  bytes received via SQL*Net from client
        201  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     100000  rows processed

17:53:50 HR@sandbox> select * from c255;

100000 rows selected.

Elapsed: 00:00:00.63

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        825  consistent gets
        295  physical reads
          0  redo size
    3107206  bytes sent via SQL*Net to client
       2552  bytes received via SQL*Net from client
        201  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     100000  rows processed

Look at the consistent gets statistic: a full scan of c1000 performs more than twice as many I/Os as a full scan of c255.

However, if you select only a few rows via a primary key, I don't think you'll notice a significant difference.
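As a rough illustration (a sketch only; the id column and constraint names are invented for demonstration), you could key both tables and compare single-row lookups:

-- Add a hypothetical key so single-row lookups can be compared.
alter table c255 add (id number);
alter table c1000 add (id number);
update c255 set id = rownum;
update c1000 set id = rownum;
commit;
alter table c255 add constraint c255_pk primary key (id);
alter table c1000 add constraint c1000_pk primary key (id);

set autotrace traceonly statistics
-- Each lookup touches a few index blocks plus one table block,
-- so consistent gets should be nearly identical for both tables.
select * from c255 where id = 42;
select * from c1000 where id = 42;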

Upvotes: 2

winkbrace

Reputation: 2711

No, there is no performance issue. All you have done is allow bigger text strings to be inserted into that column. Keeping in mind that you should always be as strict as possible about what you allow in your tables, you are free to make the columns as big as you need (max 4000 bytes for varchar2).
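For example (an illustrative sketch against the table from the question; the real table may have other mandatory columns), a value that exceeds the declared length is simply rejected, so the limit only constrains what can be stored:

-- Attempting to store more than the declared 1000 bytes fails outright.
insert into xx_vms.es_lot_updates (routing_list)
values (rpad('x', 1001, 'x'));
-- ORA-12899: value too large for column
--   "XX_VMS"."ES_LOT_UPDATES"."ROUTING_LIST" (actual: 1001, maximum: 1000)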

Upvotes: 4
