Reputation: 3677
I can copy data from an S3 bucket into a Redshift table using psycopg2:
import psycopg2
sql = """ copy table1 from 's3://bucket/myfile.csv'
access_key_id 'xxxx'
secret_access_key 'xxx' DELIMITER '\t'
timeformat 'auto'
maxerror as 250 GZIP IGNOREHEADER 1 """
cur.execute(sql)
How do I string together multiple Redshift statements to do these three things: copy the data, create a duplicate table, and drop the original?
I tried the following:
sql = """ copy table1 from 's3://bucket/myfile.csv'
access_key_id 'xxxx'
secret_access_key 'xxx' DELIMITER '\t'
timeformat 'auto'
maxerror as 250 GZIP IGNOREHEADER 1
create table table2 as table1
drop table table1"""
I don't get back any error, but table2 is not created; only the copy from above works. What am I doing wrong in my SQL?
Upvotes: 5
Views: 11063
Reputation: 5739
The following code copies table1 to table2 by creating a duplicate, then drops table1.
import psycopg2

def redshift():
    conn = psycopg2.connect(dbname='***', host='******.redshift.amazonaws.com', port='5439', user='****', password='*****')
    cur = conn.cursor()
    cur.execute("create table table2 as select * from table1;")
    cur.execute("drop table table1;")
    conn.commit()  # without a commit, the create/drop are rolled back when the connection closes
    cur.close()
    conn.close()
    print("Copy executed fine!")

redshift()
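Alternatively, psycopg2 will accept all three statements in a single execute call, provided each statement is terminated with a semicolon; the combined string in the question fails because the semicolons are missing, so Redshift sees one malformed command. A minimal sketch, assuming the bucket, credentials, and table names are the placeholders from the question:

```python
# All three statements in one string, each ended with a semicolon.
# Bucket path, keys, and table names are placeholders from the question.
PIPELINE_SQL = """
copy table1 from 's3://bucket/myfile.csv'
access_key_id 'xxxx'
secret_access_key 'xxx' DELIMITER '\t'
timeformat 'auto'
maxerror as 250 GZIP IGNOREHEADER 1;
create table table2 as select * from table1;
drop table table1;
"""

def run_pipeline(conn):
    # conn is assumed to be an open psycopg2 connection to Redshift.
    # One execute sends the whole batch; commit persists the DDL.
    with conn.cursor() as cur:
        cur.execute(PIPELINE_SQL)
    conn.commit()
```

Each statement's trailing semicolon is what lets the server split the batch; the final commit is still required because psycopg2 opens a transaction by default.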
Upvotes: 2