rp346

Reputation: 7028

Copy from S3 to Redshift

I am loading data into Redshift from S3, using a MANIFEST file to specify the load because I have to load about 8,000 files (total dataset size ~1 TB).

I am using SQL Workbench to run the load and am setting MAXERROR = 100000, but the actual number of errors is greater than 100,000. I think 100,000 is the maximum value MAXERROR accepts.
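For reference, the COPY I am running looks roughly like this (the table, bucket, and IAM role names below are placeholders, not my real ones):

COPY my_table
FROM 's3://my-bucket/manifests/load.manifest'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
MANIFEST
MAXERROR 100000;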

Is there a better way to do this? Any suggestions?

Upvotes: 0

Views: 1677

Answers (1)

Joe Harris

Reputation: 14035

If you actually have more than 100,000 errors in the data you are importing, I would suggest going back to the source and correcting the files. If that's not possible, you could try loading the data into a table with the problematic columns set to VARCHAR(MAX) and then converting them inside Redshift.
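A rough sketch of that approach might look like the following (the table names, columns, and CAST targets are hypothetical and depend on your actual data; VARCHAR(MAX) in Redshift is shorthand for VARCHAR(65535)):

-- Stage the raw files with the problematic columns as plain text
CREATE TABLE staging_events (
    id       VARCHAR(MAX),
    event_ts VARCHAR(MAX),
    amount   VARCHAR(MAX)
);

COPY staging_events
FROM 's3://my-bucket/manifests/load.manifest'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
MANIFEST;

-- Convert inside Redshift once the rows are loaded;
-- clean or filter out rows that will not cast before this step
INSERT INTO events
SELECT CAST(id AS BIGINT),
       CAST(event_ts AS TIMESTAMP),
       CAST(amount AS DECIMAL(18,2))
FROM staging_events;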

Upvotes: 1
