Reputation: 217
I'm trying to use Ansible to download some files to my various EC2 instances. The problem I'm having is with my AWS credentials. The AWS Ansible modules all work great, including the S3 module. The following (when I substitute in my AWS credentials) works like a charm.
- name: upload data import file
  s3: aws_access_key=<accesskey> aws_secret_key=<secretkey> bucket=my-bucket object=/data.zip mode=get
However, I need the Ansible playbooks and roles I'm writing to be usable by anyone, so I don't want any AWS credentials hardcoded. Everywhere else I use the Ansible AWS modules, I've eliminated aws_access_key and aws_secret_key and it works just fine, since Ansible looks for those values in environment variables. However, in every other case I'm running the modules as local actions, so they pull the credentials from my local machine, which is what I want. The problem is that when I run the S3 module on one of my instances and eliminate the credential parameters, I get:
failed: [54.173.19.238] => {"failed": true}
msg: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler'] Check your credentials
I imagine this is because, since I've not specified the credentials, the module is looking for them in environment variables on my instance, where they are not set. Nor would I want to set them in environment variables on the instance.
Is there a way I can download a file from S3 with Ansible and not have to specify my AWS credentials?
Upvotes: 4
Views: 7567
Reputation: 2145
On an EC2 instance, the best way to authorize running code to access AWS resources is to use an IAM role.
You assign a role to the instance when launching it, and you can attach any policies you need to that role.
Inside the instance, any process can query a well-known metadata URL to retrieve temporary keys and authenticate to any AWS service.
Boto, the Python library used by the Ansible S3 module, supports IAM roles automatically: if no key is provided directly or via environment variables, Boto queries that metadata URL to obtain the instance's temporary credentials.
More details on how IAM roles work can be found here: http://docs.aws.amazon.com/IAM/latest/UserGuide/roles-usingrole-ec2instance.html#role-usecase-ec2app-permissions
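As an illustration (a minimal sketch, not from the original answer: it assumes an IAM role with read access to my-bucket is already attached to the instance, and the dest path is made up), the task from the question can simply drop the credential parameters and let Boto pick up the instance-profile credentials:
- name: download data import file using the instance's IAM role
  s3:
    bucket: my-bucket
    object: /data.zip
    dest: /tmp/data.zip   # assumed destination path on the instance
    mode: get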
Upvotes: 2
Reputation: 23801
The S3 module in Ansible doesn't support the profile option, but you can do it like this if you have exported the aws_key and aws_secret environment variables:
export aws_key="AAAAAAAAAAAAAAAAAAAAAAAAAA"
export aws_secret="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
Then you can use them like this:
s3:
  aws_access_key: "{{ lookup('env','aws_key') }}"
  aws_secret_key: "{{ lookup('env','aws_secret') }}"
  bucket: "my-bucket"
  object: "/data.zip"
  mode: get
Hope this helps you, or anyone else who is looking to use local environment variables inside an Ansible playbook. Thanks
Upvotes: 3
Reputation: 20759
If you have the AWS_SECRET_KEY and AWS_ACCESS_KEY environment variables set on your Ansible host, then you could pass them as variables on the ansible-playbook command line so that you can reference them in your playbook:
$ ansible-playbook playbook.yml --extra-vars="mykey=${AWS_ACCESS_KEY} mysecret=${AWS_SECRET_KEY}"
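For example (a sketch reusing the bucket and object from the question), a task inside playbook.yml could then reference those extra vars directly:
- name: upload data import file
  s3: aws_access_key={{ mykey }} aws_secret_key={{ mysecret }} bucket=my-bucket object=/data.zip mode=get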
If you're invoking your playbook from a script then this is potentially a good way of doing it. Another approach would be to read those variables within your playbook and then reference them that way. I haven't tried this myself, but something along these lines should work:
- name: get AWS_ACCESS_KEY
  local_action: shell echo ${AWS_ACCESS_KEY}
  register: mykey

- name: get AWS_SECRET_KEY
  local_action: shell echo ${AWS_SECRET_KEY}
  register: mysecret

- name: upload data import file
  s3: aws_access_key={{ mykey.stdout }} aws_secret_key={{ mysecret.stdout }} bucket=my-bucket object=/data.zip mode=get
Upvotes: 2