Kuba

Reputation: 936

Run aws_s3 task on remote with environment credentials from executor

I would like to upload a file from a remote host to an S3 bucket, but with credentials from the local execution environment. Is that possible?

- name: Upload file
  hosts: '{{ target }}'
  gather_facts: False
  tasks:
  - name: copy file to bucket
    become: yes
    aws_s3:
      bucket={{bucket_name}}
      object={{filename}}
      src=/var/log/{{ filename }}
      mode=put

Is there any switch or option I could use? The best would be something like this:

AWS_PROFILE=MyProfile ansible-playbook upload_file.yml -e target=somehost -e bucket_name=mybucket -e filename=myfile

So I could specify the profile from my own local .aws/config file.

Obviously when running the playbook like this:

ansible-playbook upload_file.yml -e target=somehost -e bucket_name=mybucket -e filename=myfile

I'm getting the following error:

TASK [copy file to bucket] *********************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NoCredentialsError: Unable to locate credentials
fatal: [somehost]: FAILED! => {"boto3_version": "1.7.50", "botocore_version": "1.10.50", "changed": false, "msg": "Failed while looking up bucket (during bucket_check) adverity-trash.: Unable to locate credentials"}

But when I try the following:

 AWS_ACCESS_KEY=<OWN_VALID_KEY> AWS_SECRET_KEY=<OWN_VALID_SECRET> ansible-playbook upload_file.yml -e target=somehost -e bucket_name=mybucket -e filename=myfile

I get the same error.

Ansible v2.6

Upvotes: 3

Views: 3783

Answers (3)

Kuba

Reputation: 936

Here's a satisfying solution to my problem.

With the help of @einarc and the Ansible hostvars I was able to achieve remote upload with credentials coming from the local environment. Fact gathering was not necessary, and I used delegate_to to run some tasks locally. Everything is in one playbook:

- name: Transfer file
  hosts: '{{ target }}'
  gather_facts: False
  tasks:
  - name: Set AWS KEY ID
    set_fact: aws_key_id="{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
    delegate_to: 127.0.0.1
  - name: Set AWS SECRET
    set_fact: aws_secret_key="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
    delegate_to: 127.0.0.1
  - name: Get AWS KEY ID
    set_fact: aws_key_id={{hostvars[inventory_hostname]['aws_key_id']}}
  - name: Get AWS SECRET KEY
    set_fact: aws_secret_key={{hostvars[inventory_hostname]['aws_secret_key']}}
  - name: ensure boto is available
    become: true
    pip: name=boto3 state=present
  - name: copy file to bucket
    become: yes
    aws_s3:
      aws_access_key={{aws_key_id}}
      aws_secret_key={{aws_secret_key}}
      bucket=my-bucket
      object={{filename}}
      src=/some/path/{{filename}}
      mode=put
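
For reference, an invocation of this playbook with the credentials given explicitly on the command line (values are placeholders, as in the question) would look something like this:

AWS_ACCESS_KEY_ID=<OWN_VALID_KEY> AWS_SECRET_ACCESS_KEY=<OWN_VALID_SECRET> ansible-playbook transfer_to_s3.yml -e target=somehost -e filename=myfile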

Bonus: I found a way to avoid putting the AWS credentials explicitly on the command line.

I used the following bash wrapper to read the credentials from the config file with the help of the AWS CLI.

#!/bin/bash
AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id --profile "$1")
AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key --profile "$1")

AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
ansible-playbook transfer_to_s3.yml -e target=$2 -e filename=$3
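
Assuming the wrapper is saved as, say, upload.sh (the file name is mine, not from the original), it is called with the profile, the target host and the filename:

./upload.sh MyProfile somehost myfile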

Upvotes: 2

eco

Reputation: 1374

The problem here is: how do I pass environment variables from one host to another? The answer is hostvars. Feel free to do your own search on hostvars, but this will give a general idea: https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#how-do-i-see-all-the-inventory-vars-defined-for-my-host
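
As a quick illustration (not part of the solution below), you can dump everything Ansible knows about another host, e.g. localhost, with a debug task:

- name: Show all variables known for localhost
  debug:
    var: hostvars['localhost']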

Step 1: GATHER the AWS environment credentials from localhost (where you're running Ansible from). IMPORTANT: make sure to set gather_facts to true, otherwise the lookup Jinja2 plugin won't find the keys (assuming you've set them up as environment variables on localhost).

- name: Set Credentials
  hosts: localhost
  gather_facts: true
  tasks:
  - name: Set AWS KEY ID
    set_fact: AWS_ACCESS_KEY_ID="{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
  - name: Set AWS SECRET
    set_fact: AWS_SECRET_ACCESS_KEY="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"

Step 2: Import those environment variables from localhost using set_fact and the hostvars Jinja2 plugin.

Step 3: Use the environment variables on {{target}}

Step 2 and 3 are put together below.

- name: Upload file
  hosts: '{{ target }}'
  gather_facts: False
  tasks:
  - name: Get AWS KEY ID
    set_fact: aws_key_id={{hostvars['localhost']['AWS_ACCESS_KEY_ID']}}
  - name: Get AWS SECRET KEY
    set_fact: aws_secret_key={{hostvars['localhost']['AWS_SECRET_ACCESS_KEY']}}
  - name: copy file to bucket
    become: yes
    aws_s3:
      bucket={{bucket_name}}
      object={{filename}}
      src=/var/log/{{ filename }}
      mode=put
      aws_access_key='{{aws_key_id}}'
      aws_secret_key='{{aws_secret_key}}'

Upvotes: 3

Baptiste Mille-Mathias

Reputation: 2169

You get the error because the environment variables are not propagated to the remote host when the playbook is run.

As the documentation explains (bottom of the page), it is possible to use environment variables or boto profiles with aws_s3, but they have to exist on the host that performs the push.
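
Purely as an illustration of that point (and not the approach taken below), environment variables can be provided to a single task on the remote host with Ansible's environment keyword; the two variables referenced here are placeholders:

- name: copy file to bucket
  environment:
    AWS_ACCESS_KEY_ID: "{{ my_access_key_id }}"
    AWS_SECRET_ACCESS_KEY: "{{ my_secret_access_key }}"
  aws_s3:
    bucket={{bucket_name}}
    object={{filename}}
    src=/var/log/{{ filename }}
    mode=put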

So what I would do is:

  • put the AWS variables into a variable file
  • create a boto profile file on the target, generated from a template
  • launch the aws_s3 module.

vars/aws.yml

---
aws_access_key_id: 24d32dsa24da24sa2a2ss
aws_secret_access_key: 2424dadsxxx

templates/boto.j2

[Credentials]
aws_access_key_id = {{ aws_access_key_id }}
aws_secret_access_key = {{ aws_secret_access_key }}

playbook.yml

- name: Upload file
  hosts: '{{ target }}'
  gather_facts: False
  vars_files:
    - vars/aws.yml

  tasks:
    - name: push boto template
      template:
        src: boto.j2
        dest: "{{ ansible_user_dir }}/.boto"
        mode: '0400'

    - name: copy file to bucket
      become: yes
      aws_s3:
        bucket={{bucket_name}}
        object={{filename}}
        src=/var/log/{{ filename }}
        mode=put
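
With those three files in place, the playbook would presumably be run the same way as the one in the question, e.g.:

ansible-playbook playbook.yml -e target=somehost -e bucket_name=mybucket -e filename=myfile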

PS:

  • I never use boto profiles, hence I'm not sure how they work, so my code is just based on an educated guess.
  • It seems the aws_s3 documentation is unclear about which version of boto to use, as the link points to boto2 but the dependencies say boto3.
  • I'm not sure why you need become for the aws_s3 task.

Upvotes: 0
