Bitanshu Das

Reputation: 627

Copying specific columns into Amazon Redshift from an S3 data bucket

I have a file in S3 with columns like

CustomerID   CustomerName   ProductID    ProductName   Price   Date

Now the existing SQL table structure in Redshift is like

Date  CustomerID   ProductID    Price

Is there a way to copy only the selected columns into the existing table structure? The S3 file doesn't have any headers, just the data in this order.

Upvotes: 8

Views: 18905

Answers (3)

HIMANSHU JAIN

Reputation: 41

Yes, there is a way. You can create an external table with the same schema as your file (CustomerID CustomerName ProductID ProductName Price Date), pointing at the S3 location of the file. Then run an INSERT INTO ... SELECT query that picks only the columns you need from the external table.

Note: To create an external table in Redshift, you first need to create an external schema.
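
A minimal sketch of that approach, assuming Redshift Spectrum with an AWS Glue data catalog, an IAM role, and a comma-delimited file; the names spectrum_schema, spectrum_db and ext_sales, the column types, and the role ARN are placeholders of mine rather than part of this answer, and main_tablename stands for the existing target table:

    -- External schema backed by the Glue data catalog
    CREATE EXTERNAL SCHEMA spectrum_schema
    FROM DATA CATALOG
    DATABASE 'spectrum_db'
    IAM_ROLE 'arn:aws:iam::<account-id>:role/<your-spectrum-role>'
    CREATE EXTERNAL DATABASE IF NOT EXISTS;

    -- External table with the same column order as the file (types are assumptions)
    CREATE EXTERNAL TABLE spectrum_schema.ext_sales (
        CustomerID   INT,
        CustomerName VARCHAR(200),
        ProductID    INT,
        ProductName  VARCHAR(200),
        Price        DECIMAL(10,2),
        SaleDate     DATE
    )
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    STORED AS TEXTFILE
    LOCATION 's3://<<YOUR-BUCKET>>/<<YOUR-FOLDER>>/';

    -- Insert only the columns the existing table needs
    INSERT INTO main_tablename
    SELECT SaleDate, CustomerID, ProductID, Price
    FROM   spectrum_schema.ext_sales;

Note that LOCATION points at an S3 folder (prefix) rather than a single file, so the file would need to sit in its own folder for this sketch to read only that data.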

Upvotes: 1

BigDataKid

Reputation: 1227

This is for the case where the file has more columns than the target load table.

Load the data into a staging table that matches the file layout (all six columns), then insert only the columns the target table needs into the main table.
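
A possible definition for that staging table, mirroring the file's six columns; the types are assumptions (the question doesn't give them), and staging-tablename is kept as this answer's placeholder even though a real identifier can't contain a hyphen unless quoted:

    -- Staging table shaped like the S3 file (placeholder name, assumed types)
    CREATE TEMP TABLE staging-tablename (
        CustomerID   INT,
        CustomerName VARCHAR(200),
        ProductID    INT,
        ProductName  VARCHAR(200),
        Price        DECIMAL(10,2),
        "Date"       DATE    -- quoted defensively
    );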

COPY staging-tablename
FROM 's3://<<YOUR-BUCKET>>/<<YOUR-FILE>>'
credentials 'aws_access_key_id=<access-key-id>;aws_secret_access_key=<secret-access-key>';

INSERT INTO main_tablename
SELECT Date  
  ,CustomerID   
  ,ProductID    
  ,Price
FROM  staging-tablename st;

TRUNCATE TABLE staging-tablename;

ANALYZE main_tablename;

Upvotes: 7

BigDataKid

Reputation: 1227

This is for the case where the file has fewer columns than the target load table.

Assuming that CustomerName and ProductName can be NULL fields, you have two options.

Option #1 -- Load directly into the table

    COPY main_tablename
    (Date  
    ,CustomerID   
    ,ProductID    
    ,Price)
    FROM 's3://<<YOUR-BUCKET>>/<<YOUR-FILE>>'
    credentials 'aws_access_key_id=<access-key-id>;aws_secret_access_key=<secret-access-key>';

ANALYZE main_tablename;

Option #2 -- Load the data into a staging table, then join the staging table with the customer and product reference tables to insert data into the main table
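
For completeness, one possible shape for the staging table in this case, holding just the four columns present in the file (again, the types are assumptions and staging-tablename is the answer's placeholder):

    -- Staging table matching the smaller file (placeholder name, assumed types)
    CREATE TEMP TABLE staging-tablename (
        "Date"     DATE,
        CustomerID INT,
        ProductID  INT,
        Price      DECIMAL(10,2)
    );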

    COPY staging-tablename
    (Date  
    ,CustomerID   
    ,ProductID    
    ,Price)
    FROM 's3://<<YOUR-BUCKET>>/<<YOUR-FILE>>'
    credentials 'aws_access_key_id=<access-key-id>;aws_secret_access_key=<secret-access-key>';

INSERT INTO main_tablename
SELECT st.CustomerID   
      ,cust.CustomerName   
      ,st.ProductID    
      ,prod.ProductName   
      ,st.Price   
      ,st.Date
FROM  staging-tablename st
INNER JOIN  customer-tablename cust ON ( cust.CustomerID = st.CustomerID)
INNER JOIN  product-tablename prod ON ( prod.ProductID  = st.ProductID );

TRUNCATE TABLE staging-tablename;

ANALYZE main_tablename;

Upvotes: 8
