Elm

Reputation: 1407

Redshift copy command from S3 works, but no data uploaded

I am using the copy command to copy a file (.csv.gz) from AWS S3 to Redshift

copy sales_inventory
from 's3://[redacted].csv.gz'
CREDENTIALS '[redacted]'
COMPUPDATE ON
DELIMITER ','
GZIP
IGNOREHEADER 1
REMOVEQUOTES
MAXERROR 30
NULL 'NULL'
TIMEFORMAT 'YYYY-MM-DD HH:MI:SS';

I don't receive any errors, just "0 rows loaded successfully". I checked the easy things: I double-checked the file's contents and made sure the copy command targeted the right file. Then I created a simple one-row example file to test with, and that didn't load either. I've been using a copy-command template I made a long time ago, and it has worked very recently.

Any common mistakes I might have overlooked? Any way other than the example file that I could try?

Thanks.

Upvotes: 0

Views: 5451

Answers (1)

Masashi M

Reputation: 2757

With the IGNOREHEADER 1 option, Redshift treats the first line of the file as a header and skips it. If your test file contains only one line, take that option off.
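To confirm this is the cause, you can re-run the COPY on the one-row test file with IGNOREHEADER removed (keeping the placeholders from your original command; adjust the other options to match your file):

```sql
-- Same command as before, minus IGNOREHEADER 1, so the single row
-- in the test file is loaded instead of being skipped as a header.
copy sales_inventory
from 's3://[redacted].csv.gz'
CREDENTIALS '[redacted]'
COMPUPDATE ON
DELIMITER ','
GZIP
REMOVEQUOTES
MAXERROR 30
NULL 'NULL'
TIMEFORMAT 'YYYY-MM-DD HH:MI:SS';
```

If the row loads now, the "0 rows loaded" result was just the header skip consuming your only line of data.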

If your file contains multiple records, you might have a data load error. Since you specify MAXERROR 30, Redshift will skip up to 30 invalid records and still report success. Load errors from the COPY are recorded in the STL_LOAD_ERRORS table. Try SELECT * FROM STL_LOAD_ERRORS ORDER BY starttime DESC LIMIT 10; to check whether you had load errors.
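A slightly more targeted query can make the errors easier to read; selecting only the diagnostic columns of STL_LOAD_ERRORS (all of them standard columns of that system table) shows which line, column, and value failed and why:

```sql
-- Most recent load errors, with the fields that usually
-- explain the failure: the offending line, column, value,
-- and Redshift's reason string.
SELECT starttime,
       filename,
       line_number,
       colname,
       type,
       raw_field_value,
       err_code,
       err_reason
FROM stl_load_errors
ORDER BY starttime DESC
LIMIT 10;
```

If this returns no rows for your COPY, the file was parsed without errors and the problem is more likely the header skip described above.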

Upvotes: 2

Related Questions