Dennis Du

Reputation: 1

I have a table with 1000+ columns. How do I load data into a Snowflake table from Parquet files without explicitly specifying every column name?

In the Snowflake documentation, the example table has 3 columns, and loading it from a Parquet file looks like this:

```copy into cities from (select $1:continent::varchar, $1:country:name::varchar, $1:country:city::variant from @sf_tut_stage/cities.parquet);```

If I have 1000+ columns, can I avoid listing them all like `$1:col1, $1:col2, ... $1:col1000`?

Upvotes: 0

Views: 297

Answers (2)

Mat

Reputation: 601

First, the number of columns (and ideally the column names) in the target table should match the staged Parquet file.

Inspect the schema of your Parquet file:

```select *
  from table(
    infer_schema(
      location=>'@mystage'
      , file_format=>'my_parquet_format'
      )
    );```
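To go from the inferred schema to an actual table without typing the 1000+ columns, you can feed `INFER_SCHEMA` into `CREATE TABLE ... USING TEMPLATE`. A minimal sketch, assuming the same `@mystage` stage and `my_parquet_format` file format as above (the table name `target` is just a placeholder):

```sql
-- Create the target table directly from the inferred schema,
-- so none of the 1000+ column definitions are written by hand.
create table target
  using template (
    select array_agg(object_construct(*))
      from table(
        infer_schema(
          location=>'@mystage'
          , file_format=>'my_parquet_format'
          )
        )
    );
```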

Now load the data into the target table (note that `COPY INTO` options are separated by whitespace, not commas, and a named file format is referenced via `FORMAT_NAME`):

```copy into target from @mystage/filename.parquet
  on_error = 'continue'
  file_format = (format_name = 'my_parquet_format')
  match_by_column_name = 'CASE_INSENSITIVE';```

Upvotes: 0

P Needleman

Reputation: 247

You may want to check out our INFER_SCHEMA function to dynamically obtain the columns/datatypes: https://docs.snowflake.com/en/sql-reference/functions/infer_schema.html The `EXPRESSION` column should be able to get you 95% of the way there.

select *
  from table(
    infer_schema(
      location=>'@mystage'
      , file_format=>'my_parquet_format'
      )
    );
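To turn the `EXPRESSION` column into something usable in a `COPY INTO` statement, one option is to concatenate the expressions into a ready-made select list. A sketch, assuming the same `@mystage` and `my_parquet_format` as above:

```sql
-- Build the full "$1:col1::type, $1:col2::type, ..." select list
-- from INFER_SCHEMA's EXPRESSION column, in file column order.
select listagg(expression, ', ') within group (order by order_id) as select_list
  from table(
    infer_schema(
      location=>'@mystage'
      , file_format=>'my_parquet_format'
      )
    );
```

The resulting string can then be pasted into `copy into <table> from (select <select_list> from @mystage/...)`, instead of writing the 1000+ expressions by hand.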

Upvotes: 0
