Reputation: 305
I have an array that looks like this: [[(Double,Double)]]. It's a multi-dimensional array of tuples.
This is data that I will never query, as there is no need to. It only makes sense in that shape on the client side. I'm thinking of storing the entire thing as a string and then parsing it back into a multi-dimensional array.
Would that be a good approach, and would the parsing be very expensive, considering I can have at most 20 arrays, each with at most 4 inner elements, each being a tuple of two Doubles?
How would I check which is the better approach, and whether storing it as a multi-dimensional array in PostgreSQL is better?
How would I store it?
Upvotes: 1
Views: 2571
Reputation: 657387
To store an array of composite type (with any nesting level), you need a registered base type to work with. You could have a table defining the row type, or just create the type explicitly:
CREATE TYPE dd AS (a float8, b float8);
Here are some ways to construct that 2-dimensional array of yours:
SELECT ARRAY [['(1.23,23.4)'::dd]]
, (ARRAY [['(1.23,23.4)']])::dd[]
, '{{"(1.23,23.4)"}}'::dd[]
, ARRAY[ARRAY[dd '(1.23,23.4)']]
, ARRAY(SELECT ARRAY (SELECT dd '(1.23,23.4)'));
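If you go with the array approach, storing it only takes a column of type dd[]. A minimal sketch (table and column names are placeholders, not from the question):

CREATE TABLE measurement (
  measurement_id serial PRIMARY KEY
, points         dd[]
);

INSERT INTO measurement (points)
VALUES ('{{"(1.23,23.4)","(5.67,8.9)"},{"(0.1,0.2)","(3.3,4.4)"}}');

SELECT (points[1][2]).b FROM measurement;  -- access a single field: returns 8.9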
Note that the Postgres array type dd[] can store values with any level of nesting.
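For illustration (my example, not from the question), the same type accepts a 3-dimensional literal just as well:

SELECT array_ndims('{{{"(1.23,23.4)"}}}'::dd[]);  -- returns 3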
Whether that's more efficient than just storing the string literal as text very much depends on details of your use case.
float8 (= double precision) occupies 8 bytes. The text string '1' occupies 2 bytes on disk and 4 bytes in RAM; text '123.45678' occupies 10 bytes on disk and 12 bytes in RAM.
text will be read and written a bit faster than an array type of equal size.
text values are compressed (automatically), which can benefit storage size (especially with repetitive patterns) - but adds compression / decompression cost.
An actual Postgres array is cleaner in any case, as Postgres does not allow illegal strings to be stored.
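To check which approach is better for your actual data, you can compare the storage footprint of both representations with pg_column_size() (the literal below is just an example value, not your data):

SELECT pg_column_size(t)       AS text_bytes
     , pg_column_size(t::dd[]) AS array_bytes
FROM  (VALUES (text '{{"(1.23,23.4)","(5.67,8.9)"},{"(0.1,0.2)","(3.3,4.4)"}}')) AS v(t);

For read/write speed, timing a representative workload against both column types (e.g. with EXPLAIN (ANALYZE)) is the most reliable check.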
Upvotes: 6