Reputation: 10648
Or should I?
(The title is inspired by Gary Myers' comment in Why does Oracle varchar2 have a mandatory size as a definition parameter?)
Consider the following variables:
declare
  -- database table column interfacing variable
  v_a tablex.a%type;    -- tablex.a is varchar2
  -- PL/SQL only variable
  v_b varchar2(32767);  -- is this a poor convention?
begin
  select a into v_a from tablex where id = 1;
  v_b := 'Some arbitrary string: ' || v_a;   -- ignore potential ORA-06502
  insert into tabley(id, a) values(1, v_a);  -- tablex.a and tabley.a types match
  v_b := v_b || ' More arbitrary characters';
end;
/
Variable v_a is used to interface a database table column and therefore uses the %type attribute. But if I know the data type is varchar2, why shouldn't I use varchar2(4000) or varchar2(32767), which also guarantees the string read from the database column will always fit into the PL/SQL variable? Is there any other argument against this convention besides the superiority of the %type attribute?
Variable v_b is only used in PL/SQL code and is usually returned to a JDBC client (a Java/Python program, Oracle SOA/OSB, etc.) or dumped into a flat file (with UTL_FILE). If the varchar2 represents e.g. a CSV line, why should I bother to calculate the exact maximum possible length (except to verify the line will fit into 32767 bytes in all cases so I don't need a clob) and re-calculate it every time my data model changes?
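For context, a minimal sketch of the v_b use case described above - building a CSV-style line and dumping it into a flat file with UTL_FILE (the directory object, file name, and column list are illustrative assumptions, not from my actual code):
declare
  -- generous PL/SQL-only buffer; the real maximum depends on the data model
  v_line varchar2(32767);
  v_file utl_file.file_type;
begin
  v_file := utl_file.fopen('EXPORT_DIR', 'tablex.csv', 'w', 32767);
  for r in (select id, a from tablex) loop
    v_line := r.id || ';' || r.a;
    utl_file.put_line(v_file, v_line);
  end loop;
  utl_file.fclose(v_file);
end;
/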
There are plenty of questions that cover varchar2 length semantics in SQL and explain why varchar2(4000) is a poor practice in SQL. The difference between the SQL and PL/SQL varchar2 types is also well covered.
The only place where I have seen this issue discussed is points #3 and #4 in an answer by APC:
The database uses the length of a variable when allocating memory for PL/SQL collections. As that memory comes out of the PGA supersizing the variable declaration can lead to programs failing because the server has run out of memory.
There are similar issues with the declaration of single variables in PL/SQL programs, it is just that collections tend to multiply the problem.
For example, Oracle PL/SQL Programming, 5th Edition by Steven Feuerstein doesn't mention any drawbacks of declaring overly long varchar2 variables, so it can't be a critical mistake, right?
Update
After some more googling I found out that the Oracle documentation has evolved across releases:
A quote from PL/SQL User's Guide and Reference 10g Release 2 Chapter 3 PL/SQL Datatypes:
Small VARCHAR2 variables are optimized for performance, and larger ones are optimized for efficient memory use. The cutoff point is 2000 bytes. For a VARCHAR2 that is 2000 bytes or longer, PL/SQL dynamically allocates only enough memory to hold the actual value. For a VARCHAR2 variable that is shorter than 2000 bytes, PL/SQL preallocates the full declared length of the variable. For example, if you assign the same 500-byte value to a VARCHAR2(2000 BYTE) variable and to a VARCHAR2(1999 BYTE) variable, the former takes up 500 bytes and the latter takes up 1999 bytes.
A quote from PL/SQL User's Guide and Reference 11g Release 1 Chapter 3 PL/SQL Datatypes:
For a CHAR variable, or for a VARCHAR2 variable whose maximum size is less than 2,000 bytes, PL/SQL allocates enough memory for the maximum size at compile time. For a VARCHAR2 whose maximum size is 2,000 bytes or more, PL/SQL allocates enough memory to store the actual value at run time. In this way, PL/SQL optimizes smaller VARCHAR2 variables for performance and larger ones for efficient memory use.
For example, if you assign the same 500-byte value to VARCHAR2(1999 BYTE) and VARCHAR2(2000 BYTE) variables, PL/SQL allocates 1999 bytes for the former variable at compile time and 500 bytes for the latter variable at run time.
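For illustration, a minimal rendering of the example in that quote; the allocation notes in the comments restate what the 11g R1 documentation says, the snippet itself doesn't measure anything:
declare
  v_shorter varchar2(1999 byte); -- per the docs: full 1999 bytes preallocated at compile time
  v_longer  varchar2(2000 byte); -- per the docs: only ~500 bytes allocated at run time
begin
  -- assign the same 500-byte value to both variables
  v_shorter := rpad('x', 500, 'x');
  v_longer  := rpad('x', 500, 'x');
end;
/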
But PL/SQL User's Guide and Reference 11g Release 2 Chapter 3 PL/SQL Datatypes doesn't mention memory allocation any more, and I cannot find any other information about memory allocation at all. (I'm using this release, so I checked only the 11.2 documentation.) The same also holds for PL/SQL User's Guide and Reference 12c Release 1 Chapter 3 PL/SQL Datatypes.
I also found an answer by Jeffrey Kemp that addresses this question too. However, Jeffrey's answer refers to the 10.2 documentation and that question is not about PL/SQL at all.
Upvotes: 11
Views: 5565
Reputation: 10648
It looks like this is one of the areas where PL/SQL functionality has evolved over releases as Oracle has implemented different optimizations.
Note this also means some of the answers listed in the OP are release-specific, even though that is not explicitly mentioned in those questions/answers. As time passes and use of older Oracle releases ends (me daydreaming?), that information will become outdated (it might take decades, though).
The conclusion above is backed by the following quote from chapter 12, Tuning PL/SQL Applications for Performance, of the PL/SQL Language Reference 11g R1:
Declare VARCHAR2 Variables of 4000 or More Characters
You might need to allocate large VARCHAR2 variables when you are not sure how big an expression result will be. You can conserve memory by declaring VARCHAR2 variables with large sizes, such as 32000, rather than estimating just a little on the high side, such as by specifying 256 or 1000. PL/SQL has an optimization that makes it easy to avoid overflow problems and still conserve memory. Specify a size of more than 4000 characters for the VARCHAR2 variable; PL/SQL waits until you assign the variable, then only allocates as much storage as needed.
This issue is no longer mentioned in the 11g R2 or 12c R1 versions of the document. This is in line with the evolution of chapter 3, PL/SQL Datatypes.
Answer:
Since 11gR2 it makes no difference from a memory-use point of view whether you use varchar2(10) or varchar2(32767). The Oracle PL/SQL compiler will take care of the dirty details for you in an optimal fashion!
For releases prior to 11gR2 there is a cutoff point at which different memory management strategies are used, and this is clearly documented in each release's PL/SQL Language Reference.
The above only applies to PL/SQL-only variables where there is no natural length restriction that can be derived from the problem domain. If a varchar2 variable represents a GTIN-14, then one should declare it as varchar2(14).
When a PL/SQL variable interfaces with a table column, use the %type attribute, as that is the zero-effort way to keep your PL/SQL code and database structure in sync.
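Putting the two conventions together, a short sketch (the table and column names are placeholders, as in the question):
declare
  v_a    tablex.a%type;   -- interfaces a table column: stays in sync with the DDL
  v_gtin varchar2(14);    -- length derived from the problem domain (GTIN-14)
  v_line varchar2(32767); -- PL/SQL-only working buffer with no natural length limit
begin
  select a into v_a from tablex where id = 1;
  v_gtin := '00012345678905';
  v_line := v_a || ';' || v_gtin;
end;
/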
Memory test results:
I ran a memory analysis in Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 with the following results:
str_size  iterations  UGA change  PGA change
--------  ----------  ----------  ----------
      10         100       65488           0
      10        1000       65488       65536
      10       10000       65488      655360
   32767         100       65488           0
   32767        1000       65488       65536
   32767       10000       65488      655360
Because the PGA changes are identical and depend only on iterations and not on str_size, I conclude that the declared varchar2 size doesn't matter. The test might be too naïve, though - comments welcome!
The test script:
-- plsql_memory is a convenience package wrapping the sys.v_$mystat and
-- sys.v_$statname views, written by Steven Feuerstein and available in the
-- code zip file accompanying his book.
set verify off

define str_size=&1
define iterations=&2

declare
  type str_list_t is table of varchar2(&str_size);
begin
  plsql_memory.start_analysis;

  declare
    v_strs str_list_t := str_list_t();
  begin
    for i in 1 .. &iterations
    loop
      v_strs.extend;
      v_strs(i) := rpad(to_char(i), 10, to_char(i));
    end loop;

    plsql_memory.show_memory_usage;
  end;
end;
/

exit
Test run example:
$ sqlplus -SL <CONNECT_STR> @memory-test.sql 32767 10000
Change in UGA memory: 65488 (Current = 1927304)
Change in PGA memory: 655360 (Current = 3572704)
PL/SQL procedure successfully completed.
$
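For completeness, a minimal sketch of what a plsql_memory-style wrapper over the sys.v_$mystat and sys.v_$statname views could look like. This is my guess at an equivalent implementation, not the package from Feuerstein's book; it assumes select privileges on those views and set serveroutput on in the session:
create or replace package plsql_memory is
  procedure start_analysis;
  procedure show_memory_usage;
end plsql_memory;
/
create or replace package body plsql_memory is
  g_start_uga number;
  g_start_pga number;

  -- read a single session statistic by name
  function session_stat(p_name in varchar2) return number is
    l_value number;
  begin
    select ms.value
      into l_value
      from sys.v_$mystat ms
      join sys.v_$statname sn on sn.statistic# = ms.statistic#
     where sn.name = p_name;
    return l_value;
  end session_stat;

  procedure start_analysis is
  begin
    g_start_uga := session_stat('session uga memory');
    g_start_pga := session_stat('session pga memory');
  end start_analysis;

  procedure show_memory_usage is
    l_uga number := session_stat('session uga memory');
    l_pga number := session_stat('session pga memory');
  begin
    dbms_output.put_line('Change in UGA memory: ' || (l_uga - g_start_uga) || ' (Current = ' || l_uga || ')');
    dbms_output.put_line('Change in PGA memory: ' || (l_pga - g_start_pga) || ' (Current = ' || l_pga || ')');
  end show_memory_usage;
end plsql_memory;
/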
Upvotes: 6
Reputation: 5636
Maybe it's because I came of age in the era when the most memory any system had was 48k and then, happy days, up to a full 64k. And it was not virtual: what you allocated is what you got (wyaiwyg) (WAY-WIG?). I see in a lot of younger programmers a tendency to be lazy in their design and to hide design flaws by throwing more memory at the problem.
If we get into the habit of just typing varchar2(MAX) whenever we define a string variable, we stop thinking about the length. But sometimes length matters. If we haven't already done so, then as soon as we type the ( we should stop and put some thought into how big it really needs to be. Does the size matter here? If so, what is a reasonable maximum (or minimum)? This forces us to look beyond the bytes and fields and indexes to the actual "thing" we are trying to work with. That is never a bad idea.
Discipline is hard. We should be developing habits to enforce it whenever we can. Good data and code design is difficult by nature. There are tools and techniques we can use to make it a little easier, but we shouldn't be doing anything just because it is easier. That's the path to the Dark Side and it catches up to us sooner or later.
Upvotes: 1
Reputation: 156978
I think the problem is the same as in any other piece of software: allocating too much memory will decrease performance and increase the likelihood of failure once all available memory has been consumed.
In my opinion, use %type as much as possible, since it prevents mistakes when changing the data type or length and makes clear what the origin of that variable is.
Upvotes: 0