Rwuan

Reputation: 3

PostgreSQL allocated column length

I have an issue where one column in a database might be anywhere from 10 to 10,000 bytes in size. Does PostgreSQL support sparse data? That is, will it always set aside 10,000 bytes for every entry in the column, or only the space that is actually required for each entry?

Upvotes: 0

Views: 838

Answers (2)

Denis de Bernardy

Reputation: 78443

Postgres will store long varlena types in an extended storage called TOAST.

In the case of strings, it keeps things inline up to 126 bytes (which can mean fewer than 126 characters for multibyte encodings), and then sends them to the external storage.

You can see where the data is stored using psql:

\d+ yourtable

As an aside, note that from Postgres' standpoint, there's absolutely no difference (with respect to storage) between declaring a column's type as varchar or varchar(large_number) -- it'll be stored in the exact same way. There is, however, a very slight performance penalty in using varchar(large_number) because of the string length check.
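One way to verify this is with `pg_column_size()`, which reports the bytes a value actually occupies on disk. A minimal sketch, assuming a throwaway table named `t`:

```sql
-- Hypothetical table comparing varchar with and without a length limit.
CREATE TABLE t (a varchar, b varchar(10000));
INSERT INTO t VALUES (repeat('x', 20), repeat('x', 20));

-- Both columns should report the same on-disk size for identical values,
-- regardless of the declared limit.
SELECT pg_column_size(a) AS a_bytes,
       pg_column_size(b) AS b_bytes
FROM t;
```

The reported sizes track the stored value, not the column declaration, which is the point made above.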

Upvotes: 2

Bohemian

Reputation: 425003

Use `varchar` or `text` types. These use only the space actually required to store the data, plus a small per-value header (1 or 4 bytes in current Postgres) that stores the length.
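A quick sketch to see this in practice (table and column names here are made up for illustration):

```sql
-- Short and long values in the same text column take very different space.
CREATE TABLE notes (body text);
INSERT INTO notes VALUES (repeat('a', 10)), (repeat('a', 10000));

-- The short row reports only a few bytes; the long row may report far
-- fewer than 10,000 bytes if TOAST compression has been applied.
SELECT pg_column_size(body) FROM notes;
```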

Upvotes: 0
