wlodarmt

Reputation: 1

Working around the 1600 column max in Postgres during CSV import

I am trying to import a massive CSV file (nearly 10,000 columns) into a Postgres database. Normally, to import a file I would COPY it into a temporary table and then use that temp table to place everything where it belongs. However, Postgres has a maximum of 1600 columns per table, so the CSV doesn't import. I don't have any control over the size of the CSV or how it is formatted, so I need to work with it as-is.

Is there either a way to increase this limit for temporary tables, or a way to use the COPY command to parse the file into multiple temporary tables? I would also be fine with another way of importing the file if you have any suggestions.

Any advice? I am currently using the COPY command to import the CSV:

COPY INTAKE
FROM 'file location'
CSV HEADER;

Thank you for your help!

Upvotes: 0

Views: 1362

Answers (1)

AdamKG

Reputation: 14081

Import it all as a single column, as described here:

https://stackoverflow.com/a/60120879/16361

Once you have that, you can put together a query that will generate the SQL to extract the fields you want, the way you want them. Here's a small example with 5 columns and a max table size of 2 columns (in real life, you'd want to set max_cols_per_table in the code block following this one to a large number, like ~1000, so that with ~10,000 columns you're creating 10 tables instead of 5,000).

$ cat /tmp/fivewide.csv
c1,c2,c3,c4,c5
r1c1,r1c2,r1c3,r1c4,r1c5
r2c1,r2c2,r2c3,r2c4,r2c5
$ psql -X testdb
psql (12.3 (Debian 12.3-1), server 10.5 (Debian 10.5-1+build4))
Type "help" for help.

testdb=# create table my_import_table(data text);
CREATE TABLE
testdb=# \copy my_import_table from /tmp/fivewide.csv csv delimiter e'\x01' quote e'\x02'
COPY 3
testdb=# select * from my_import_table;
           data           
--------------------------
 c1,c2,c3,c4,c5
 r1c1,r1c2,r1c3,r1c4,r1c5
 r2c1,r2c2,r2c3,r2c4,r2c5
(3 rows)

Generating the CREATE TABLE statements so the data is split into actual columns (this will break if there are commas in the values; I'm not going to implement a full CSV parser in SQL :)

testdb=# with
settings as (select 2 as max_cols_per_table, 'my_import_table' as import_table, 'data' as column_name),
computed_settings1 as (select array_length(string_to_array(data, ','), 1) as num_cols from my_import_table limit 1),
computed_settings2 as (select (ceil((select num_cols::float from computed_settings1) / (select max_cols_per_table from settings)))::integer as num_tables),
columns_exprs as (select i, '(string_to_array('||(select column_name from settings)||$$, ','))[$$||i||'] AS col'||i as cexpr from generate_series(1, (select num_cols from computed_settings1)) as i),
column_exprs_by_table as (select t, cexpr from generate_series(1, (select num_tables from computed_settings2)) as t join columns_exprs ce
on i>((t-1)*(select max_cols_per_table from settings))
AND i<=(t*(select max_cols_per_table from settings))
),
create_table_stmts as (select 'create table t_'||t||' AS SELECT '||string_agg(cexpr, ', ')||' FROM '||(select import_table from settings)||';' from column_exprs_by_table group by t)
select * from create_table_stmts;
                                                             ?column?                                                              
-----------------------------------------------------------------------------------------------------------------------------------
 create table t_3 AS SELECT (string_to_array(data, ','))[5] AS col5 FROM my_import_table;
 create table t_2 AS SELECT (string_to_array(data, ','))[3] AS col3, (string_to_array(data, ','))[4] AS col4 FROM my_import_table;
 create table t_1 AS SELECT (string_to_array(data, ','))[1] AS col1, (string_to_array(data, ','))[2] AS col2 FROM my_import_table;
(3 rows)

Executing this DDL with DO and inspecting the results:

testdb=# DO $outer$
DECLARE
stmt text;
BEGIN
FOR stmt IN
with
settings as (select 2 as max_cols_per_table, 'my_import_table' as import_table, 'data' as column_name),
computed_settings1 as (select array_length(string_to_array(data, ','), 1) as num_cols from my_import_table limit 1),
computed_settings2 as (select (ceil((select num_cols::float from computed_settings1) / (select max_cols_per_table from settings)))::integer as num_tables),
columns_exprs as (select i, '(string_to_array('||(select column_name from settings)||$$, ','))[$$||i||'] AS col'||i as cexpr from generate_series(1, (select num_cols from computed_settings1)) as i),
column_exprs_by_table as (select t, cexpr from generate_series(1, (select num_tables from computed_settings2)) as t join columns_exprs ce
on i>((t-1)*(select max_cols_per_table from settings))
AND i<=(t*(select max_cols_per_table from settings))
),
create_table_stmts as (select 'create table t_'||t||' AS SELECT '||string_agg(cexpr, ', ')||' FROM '||(select import_table from settings)||';' from column_exprs_by_table group by t)
select * from create_table_stmts
LOOP
EXECUTE stmt;
END LOOP;
END;
$outer$;
DO
testdb=# select * from t_1;
 col1 | col2 
------+------
 c1   | c2
 r1c1 | r1c2
 r2c1 | r2c2
(3 rows)

testdb=# select * from t_2;
 col3 | col4 
------+------
 c3   | c4
 r1c3 | r1c4
 r2c3 | r2c4
(3 rows)

testdb=# select * from t_3;
 col5 
------
 c5
 r1c5
 r2c5
(3 rows)

Further improvements, perhaps being smart about using row 1 as the column names, or always including some column that would be useful for joining the tables back together, are left as an exercise for the reader.
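
A rough, untested sketch of that second improvement: declare the staging table with a serial key before the \copy, so the key follows the file's line order, then prepend it to each generated column list.

-- A synthetic join key for the split tables. COPY assigns nextval()
-- row by row, so row_id follows the file's line order.
create table my_import_table(row_id serial, data text);
\copy my_import_table(data) from /tmp/fivewide.csv csv delimiter e'\x01' quote e'\x02'

-- In create_table_stmts above, prepend the key to each generated column list:
--   'create table t_'||t||' AS SELECT row_id, '||string_agg(cexpr, ', ')||' FROM '||...

-- Once the t_N tables are regenerated with row_id included, the chunks
-- can be stitched back together:
select * from t_1 join t_2 using (row_id) join t_3 using (row_id);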

Upvotes: 1
