alim1990

Reputation: 4972

Python and Snowflake error on appending into existing table on the cloud

I am trying to upload a dataframe into an existing table in snowflake cloud. Here is the dataframe:

columns_df.head()

[screenshot of the columns_df.head() output]

Now, when using to_sql() from pandas to append data into the existing table:

columns_df.to_sql('survey_metadata_column_names', index=False,  index_label=None, con=conn, schema='PUBLIC', if_exists='append', chunksize=300)

I received the following error:

DatabaseError: Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': not all arguments converted during string formatting

TypeError: not all arguments converted during string formatting

Some of the column names contain dashes and underscores.

Upvotes: 2

Views: 7877

Answers (2)

Oliver

Reputation: 8572

From the Snowflake documentation:

To write data from a Pandas DataFrame to a Snowflake database, do one of the following:

  • Call the write_pandas() function.
  • Call the pandas.DataFrame.to_sql() method, and specify pd_writer as the method to use to insert the data into the database.

Note the key detail in the second bullet: pd_writer must be passed as the method. I have run into several issues with both approaches, but they are the official solutions.

from snowflake.connector.pandas_tools import pd_writer

columns_df.to_sql('survey_metadata_column_names',
                  index=False,
                  index_label=None,
                  con=engine,  # engine must be an SQLAlchemy engine, not a raw connection
                  schema='PUBLIC',
                  if_exists='append',
                  chunksize=300,
                  method=pd_writer)

Or alternatively

import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

con = snowflake.connector.connect(...)
success, nchunks, nrows, _ = write_pandas(con,
                                          columns_df,
                                          'survey_metadata_column_names',
                                          chunk_size=300,
                                          schema='PUBLIC')

Note that the first method requires an SQLAlchemy engine while the second one can use a regular connection.
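The sqlite_master query in the original error is the clue here: when pandas is handed a raw DBAPI connection it does not recognize, it falls back to its built-in SQLite code path, which is why the to_sql() route needs an SQLAlchemy engine. A minimal sketch of the Snowflake connection URL, assuming the snowflake-sqlalchemy package is installed and with every credential a placeholder:

```python
# Sketch: SQLAlchemy connection URL for Snowflake. The snowflake-sqlalchemy
# package registers the "snowflake://" dialect; every value below is a placeholder.
url = (
    "snowflake://my_user:my_password@my_account/"
    "MY_DATABASE/PUBLIC?warehouse=MY_WAREHOUSE"
)

# With snowflake-sqlalchemy installed, build the engine and pass it as con=:
# from sqlalchemy import create_engine
# engine = create_engine(url)
# columns_df.to_sql(..., con=engine, method=pd_writer)
```

The database and schema are path components of the URL, while the warehouse (and optionally the role) go in the query string.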

Upvotes: 3

June

Reputation: 730

Have a look at the solution I posted here, which allows for writes (create and replace) as well as appends: write_pandas snowflake connector function is not able to operate on table

Upvotes: 1
