Reputation: 6350
What's the best practice for obtaining the AWS credentials needed to execute a Redshift COPY command from S3? I'm automating the ingestion process from S3 into Redshift by having a machine trigger the COPY command.
I know it's recommended to use IAM roles on EC2 hosts so that you don't need to store AWS credentials. How would that work with the Redshift COPY command, though? I don't particularly want the credentials in the source code. Similarly, the hosts are provisioned by Chef, so if I set the credentials as environment variables they would end up in the Chef scripts.
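For context, the COPY I'm triggering looks roughly like this (a minimal sketch run via psql; the cluster, database, table and bucket names are placeholders), which is why the statement itself has to carry an access key and secret key:
psql -h my-cluster.redshift.amazonaws.com -p 5439 -U loader -d analytics -c "
  COPY staging_table
  FROM 's3://my-bucket/incoming/'
  CREDENTIALS 'aws_access_key_id=<access-key>;aws_secret_access_key=<secret-key>'
  DELIMITER ',';"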
Upvotes: 2
Views: 553
Reputation: 133
You need credentials to use the COPY command. If your question is about how to obtain those credentials, then from the host where the program runs you can read the IAM role's temporary credentials out of the EC2 instance metadata and use the access key, secret key and token they contain. You can fetch them on the fly just before the COPY command and substitute them into it.
# Read the instance's IAM role name, then its temporary credentials, from the EC2 metadata service
ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
CREDS=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE)
export ACCESSKEY=$(echo "$CREDS" | grep '"AccessKeyId"' | cut -d'"' -f4)
export SECRETKEY=$(echo "$CREDS" | grep '"SecretAccessKey"' | cut -d'"' -f4)
export TOKEN=$(echo "$CREDS" | grep '"Token"' | cut -d'"' -f4)
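For example, a sketch of splicing those exported values into the COPY statement just before it runs (the cluster, database, table and bucket names are placeholders); the token has to be passed alongside the key pair because instance-role credentials are temporary:
psql -h my-cluster.redshift.amazonaws.com -p 5439 -U loader -d analytics -c "
  COPY staging_table
  FROM 's3://my-bucket/incoming/'
  CREDENTIALS 'aws_access_key_id=${ACCESSKEY};aws_secret_access_key=${SECRETKEY};token=${TOKEN}'
  DELIMITER ',';"
Note that the role's credentials rotate periodically, so fetch them right before each COPY rather than caching them.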
Upvotes: 2
Reputation: 6350
It seems the recommended way (as suggested by an AWS developer) is to use temporary credentials obtained from the AWS Security Token Service (AWS STS).
I have yet to implement this, but it's the approach I'll be looking to take.
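A minimal sketch of what that could look like with the AWS CLI (the role ARN and session name are placeholders, and the host needs permission to call sts:AssumeRole for that role):
# Ask STS for short-lived credentials by assuming a role that can read the S3 bucket
CREDS=$(aws sts assume-role \
        --role-arn arn:aws:iam::123456789012:role/redshift-copy-role \
        --role-session-name redshift-copy \
        --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text)
read ACCESSKEY SECRETKEY TOKEN <<< "$CREDS"
# These values then go into the COPY statement's CREDENTIALS clause as
# aws_access_key_id, aws_secret_access_key and token, as in the answer above.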
Upvotes: 0