another geek

Reputation: 361

How to upload a folder to AWS S3 recursively using Ansible

I'm using Ansible to deploy my application. I've come to the point where I want to upload my grunted assets to a newly created bucket. Here is what I have done: {{hostvars.localhost.public_bucket}} is the bucket name, and {{client}}/{{version_id}}/assets/admin is the path to a folder containing multi-level folders and assets to upload:

- s3:
    aws_access_key: "{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
    aws_secret_key: "{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
    bucket: "{{hostvars.localhost.public_bucket}}"
    object: "{{client}}/{{version_id}}/assets/admin"
    src: "{{trunk}}/public/assets/admin"
    mode: put

Here is the error message:

   fatal: [x.y.z.t]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "s3"}, "module_stderr": "", "msg": "MODULE FAILURE", "parsed": false}

   module_stdout:

   Traceback (most recent call last):
     File "/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3", line 2868, in <module>
       main()
     File "/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3", line 561, in main
       upload_s3file(module, s3, bucket, obj, src, expiry, metadata, encrypt, headers)
     File "/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3", line 307, in upload_s3file
       key.set_contents_from_filename(src, encrypt_key=encrypt, headers=headers)
     File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 1358, in set_contents_from_filename
       with open(filename, 'rb') as fp:
   IOError: [Errno 21] Is a directory: '/home/abcd/efgh/public/assets/admin'

I went through the documentation and didn't find a recursion option for the Ansible s3 module. Is this a bug, or am I missing something?

Upvotes: 5

Views: 10489

Answers (4)

toast38coza

Reputation: 9076

As of Ansible 2.3, you can use the s3_sync module:

- name: basic upload
  s3_sync:
    bucket: tedder
    file_root: roles/s3/files/

Note: If you're using a non-default region, you should set region explicitly; otherwise you'll get a somewhat obscure error along the lines of: An error occurred (400) when calling the HeadObject operation: Bad Request

Here's a complete playbook matching what you were trying to do above:

- hosts: localhost
  vars:
    aws_access_key: "{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
    aws_secret_key: "{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"    
    bucket: "{{hostvars.localhost.public_bucket}}"
  tasks:
  - name: Upload files
    s3_sync:
      aws_access_key: '{{aws_access_key}}'
      aws_secret_key: '{{aws_secret_key}}'
      bucket: '{{bucket}}'
      file_root: "{{trunk}}/public/assets/admin"
      key_prefix: "{{client}}/{{version_id}}/assets/admin"
      permission: public-read
      region: eu-central-1

Notes:

  1. You could probably remove region; I just added it to illustrate the point above.
  2. I've added the keys just to be explicit. You can (and probably should) use environment variables for this instead, as in the sketch after the docs quote below:

From the docs:

If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence AWS_URL or EC2_URL, AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY or EC2_ACCESS_KEY, AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY or EC2_SECRET_KEY, AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN, AWS_REGION or EC2_REGION
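
For example, here's a minimal sketch of the same upload relying on those environment variables instead of explicit keys. The bucket and path variables are the ones from the question, and it assumes AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are exported in the environment Ansible runs in:

- name: Upload files using credentials from the environment
  # No aws_access_key/aws_secret_key here: s3_sync picks up
  # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment.
  s3_sync:
    bucket: "{{ hostvars.localhost.public_bucket }}"
    file_root: "{{ trunk }}/public/assets/admin"
    key_prefix: "{{ client }}/{{ version_id }}/assets/admin"
    permission: public-read
    region: eu-central-1  # or drop this and rely on AWS_REGION / EC2_REGION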

Upvotes: 11

notatoad

Reputation: 395

I was able to accomplish this with the s3 module by iterating over a listing of the directory I wanted to upload. The little inline Python script I'm running via the command module just outputs the full list of file paths in the directory, formatted as JSON.

-  name: upload things
   hosts: localhost
   connection: local

   tasks:
     - name: Get all the files in the directory I want to upload, formatted as a JSON list
       command: python -c 'import os, json; print(json.dumps([os.path.join(dp, f)[2:] for dp, dn, fn in os.walk(os.path.expanduser(".")) for f in fn]))'
       args:
           chdir: ../../styles/img
       register: static_files_cmd

     - s3:
           bucket: "{{ bucket_name }}"
           mode: put
           object: "{{ item }}"
           src: "../../styles/img/{{ item }}"
           permission: "public-read"
       with_items: "{{ static_files_cmd.stdout|from_json }}"

Upvotes: 3

Abdelaziz Dabebi

Reputation: 1638

Since you're using Ansible, it looks like you want something idempotent, but Ansible's s3 module doesn't yet support directory uploads or any recursion, so you should probably use the AWS CLI to do the job, like this:

command: "aws s3 cp {{client}}/{{version_id}}/assets/admin s3://{{hostvars.localhost.public_bucket}}/ --recursive"

Upvotes: 2

Piyush Patil

Reputation: 14533

The Ansible s3 module does not support directory uploads or any recursion. For this task, I'd recommend using the AWS CLI; check the syntax below.

command: "aws s3 cp {{client}}/{{version_id}}/assets/admin s3://{{hostvars.localhost.public_bucket}}/ --recursive"

Upvotes: 3
