yakya

Reputation: 5210

VARCHAR length: any reason to use (2^n) - 1?

So I've been specifying lengths of (2^n)-1 whenever I declare a varchar or nvarchar, but I wonder whether it is actually good practice. Sizing everything in powers of two has become a habit for me in software development, for the obvious reasons, but SQL Server sometimes surprises me (the max length of nvarchar being not 4095 or 4096 but 4000, for example). I wonder whether there is any value in what I'm doing.
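A minimal sketch of the kind of declarations I mean (the table and column names are just illustrative):

    -- every string column sized as (2^n)-1
    CREATE TABLE Customer (
        FirstName nvarchar(31),    -- (2^5)-1
        LastName  nvarchar(63),    -- (2^6)-1
        Notes     nvarchar(2047)   -- (2^11)-1
    );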

Does defining lengths like this have any value today? I guess it might have had value in the old days.

EDIT: If it doesn't bring any performance improvement, why do the system views use these kinds of lengths (you can check any of them)? For example, I checked INFORMATION_SCHEMA.COLUMNS (a query that reproduces this listing is shown after the table) and the columns are:

Column_name                 Type        Length (bytes)
------------------------------------------------------
TABLE_CATALOG               nvarchar    256
TABLE_SCHEMA                nvarchar    256
TABLE_NAME                  sysname     256
COLUMN_NAME                 sysname     256
ORDINAL_POSITION            int         4
COLUMN_DEFAULT              nvarchar    8000
IS_NULLABLE                 varchar     3
DATA_TYPE                   nvarchar    256
CHARACTER_MAXIMUM_LENGTH    int         4
CHARACTER_OCTET_LENGTH      int         4
NUMERIC_PRECISION           tinyint     1
NUMERIC_PRECISION_RADIX     smallint    2
NUMERIC_SCALE               int         4
DATETIME_PRECISION          smallint    2
CHARACTER_SET_CATALOG       sysname     256
CHARACTER_SET_SCHEMA        sysname     256
CHARACTER_SET_NAME          sysname     256
COLLATION_CATALOG           sysname     256
COLLATION_SCHEMA            sysname     256
COLLATION_NAME              sysname     256
DOMAIN_CATALOG              sysname     256
DOMAIN_SCHEMA               sysname     256
DOMAIN_NAME                 sysname     256
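For reference, a sketch of one way to reproduce a listing in this shape (assuming sp_help; the Length it reports is a byte count, which is why a two-byte-per-character nvarchar(128) shows up as 256):

    -- run in any database; Length in the output is in bytes
    EXEC sp_help 'INFORMATION_SCHEMA.COLUMNS';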

Upvotes: 2

Views: 1297

Answers (1)

Damien_The_Unbeliever

Reputation: 239636

A (2^n)-1 rule makes sense if strings are stored using the C convention of single-byte characters terminated by a single NUL byte. But as far as I'm aware, that has never been how SQL Server stores its strings.

In SQL Server, the length of the string is explicitly stored (either in the schema for fixed-length types or within the row data for variable-length types) and so, no, this convention does not make sense here.
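A quick sketch to illustrate (the variable name is arbitrary): LEN returns the character count and DATALENGTH the stored byte count, and there is no terminator byte to account for.

    -- nvarchar stores 2 bytes per character, no terminator;
    -- the string's length is tracked separately in the row
    DECLARE @s nvarchar(10) = N'abc';
    SELECT LEN(@s)        AS char_count,   -- 3
           DATALENGTH(@s) AS byte_count;   -- 6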


When looking at the INFORMATION_SCHEMA views, almost all of the character columns are (directly or indirectly) using the sysname data type - so it's not a case of them repeatedly choosing 2^n sizes - they decided that sysname should be nvarchar(128), which is what shows up as 256 bytes in your listing (very old versions of SQL Server used a much smaller size limit, about 12 characters if memory serves).
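One way to confirm the mapping is to query sys.types, which reports max_length in bytes:

    -- sysname is a built-in alias type; max_length is a byte count
    SELECT name, max_length
    FROM sys.types
    WHERE name IN ('sysname', 'nvarchar');
    -- sysname  -> 256  (i.e. nvarchar(128))
    -- nvarchar -> 8000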

Upvotes: 4
