Alex Cohen

Reputation: 6236

msg: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials

I am trying to run Ansible against my EC2 instances on AWS for the first time, on a fresh instance, but every time I try to run a play I can't get past this error message:

PLAY [localhost]
**************************************************************

TASK: [make one instance]
***************************************************** 
failed: [localhost] => {"failed": true} msg: No handler was ready to
authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check
your credentials

FATAL: all hosts have already failed -- aborting

PLAY RECAP
********************************************************************
   to retry, use: --limit @/home/ubuntu/ans_test.retry

localhost                  : ok=0    changed=0    unreachable=0    failed=1

I think there may be something wrong with the permissions on my IAM user and group. I have given my IAM user and group ReadOnlyAccess, AdministratorAccess and PowerUserAccess. I have an access key ID and secret access key that I am setting as environment variables with the commands:

   export AWS_ACCESS_KEY_ID='AK123'
   export AWS_SECRET_ACCESS_KEY='abc123'

With 'AK123' and 'abc123' replaced by my actual ID and key values. What else do I need to do to get the Ansible ec2 task working?
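For reference, this is how I am exporting the variables and sanity-checking that they are set in the same shell that runs ansible-playbook (the playbook name below is just a placeholder):

    export AWS_ACCESS_KEY_ID='AK123'
    export AWS_SECRET_ACCESS_KEY='abc123'
    # confirm both variables are visible in this shell
    env | grep AWS_
    # run the play from the same shell (and as the same user) that exported them;
    # the exports are lost in a new shell or when the play is run through sudo
    ansible-playbook my_playbook.yml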

UPDATE:
I fixed the problem; I guess I didn't really have a solid understanding of what environment variables are. I fixed it by setting aws_access_key and aws_secret_key directly inside my ec2 task. Below is my working playbook:

- hosts: localhost
  connection: local
  gather_facts: False

  tasks:
    # this task creates 5 ec2 instances that are all named demo and are copies of the image specified
    - name: Provision a set of instances
      ec2:
        aws_access_key: .....
        aws_secret_key: ....
        key_name: .....
        group: .....
        instance_type: t2.micro
        image: ......
        region: us-east-1
        ec2_url: .......
        wait: true
        exact_count: 5
        count_tag:
          Name: Demo
        instance_tags:
          Name: Demo
      register: ec2

I guess now I need to start using Ansible Vault just to hold my key and ID.
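A rough sketch of what that could look like, assuming an encrypted vars file called vault.yml (the file and variable names are just examples):

    # vault.yml, created and encrypted with: ansible-vault create vault.yml
    vault_aws_access_key: AK123
    vault_aws_secret_key: abc123

The playbook then loads that file and references the variables instead of hard-coding the keys, and is run with --ask-vault-pass (or --vault-password-file):

    - hosts: localhost
      connection: local
      gather_facts: False
      vars_files:
        - vault.yml
      tasks:
        - name: Provision a set of instances
          ec2:
            aws_access_key: "{{ vault_aws_access_key }}"
            aws_secret_key: "{{ vault_aws_secret_key }}"
            ...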

Upvotes: 15

Views: 23738

Answers (6)

user16454406

Reputation: 9

First of all, it is always safer not to include credentials in the playbook or fetch AWS credentials explicitly.

In case it benefits someone: I had the same error while migrating from Ansible 2.9 to 2.15. The catch was that my new EC2 instance was an Amazon Linux 2023 machine, which has IMDSv2 enabled by default, and IMDSv2 is not supported by boto or by amazon.aws collection versions older than 4.x. Somehow I had two references to the amazon.aws collection on my machine: one pointing to collection version 1.x and another to 6.x (to check the installed version, run: ansible-galaxy collection list amazon.aws).

To make Ansible use the amazon.aws collection version 6.x, we need to create a file ~/.ansible.cfg with the content below:

    [defaults]
    collections_paths = /path/to/your/collection/version/ansible_collections

Additionally, I had to move from amazon.aws.ec2 to amazon.aws.ec2_instance (to start the target machine/instance).

Conclusion: IMDSv2 is supported by amazon.aws collection versions 4.x and above, together with boto3. All credit to the Ansible Forum and its contributors for the guidance.
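To illustrate that last point, here is a rough sketch of what the task can look like after switching to amazon.aws.ec2_instance (the instance name, key name, AMI ID and security group below are placeholders, not the values I actually used):

    - name: Start the target instance with the newer module
      amazon.aws.ec2_instance:
        name: demo-instance                # placeholder Name tag
        key_name: my-key                   # placeholder
        instance_type: t2.micro
        image_id: ami-0123456789abcdef0    # placeholder
        security_group: default            # placeholder
        region: us-east-1
        state: started
        wait: true
      register: ec2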

Upvotes: 0

This is how a sample .boto file should look; anything else will cause issues and lead to errors:

[Credentials]
#aws_access_key_id =
#aws_secret_access_key =
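If the .boto file keeps giving trouble, the same keys can also go into the shared AWS credentials file, which boto3 (and reasonably recent boto releases) picks up automatically; a minimal example with placeholder values:

    # ~/.aws/credentials
    [default]
    aws_access_key_id = AK123
    aws_secret_access_key = abc123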

Upvotes: 0

udondan

Reputation: 60079

It's worth mentioning that the ec2 module makes use of the package boto, while there is a newer module ec2_instance, which uses boto3.

Apparently there are differences in how these two packages detect credentials in their environment. I have not found a way to make the ec2 module work inside an ECS container, most probably because ECS did not exist when the last version of boto was released, so it has no way to detect the "instance profile" of the ECS container. With ec2_instance this works out of the box, with no additional configuration required.
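A quick way to confirm that the boto3-based credential chain can actually see the ECS task role or EC2 instance profile (assuming the AWS CLI is installed where the playbook runs) is:

    # prints the account and role ARN of whatever credentials were detected
    aws sts get-caller-identity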

Upvotes: 0

Andrzej Rehmann

Reputation: 13900

In my case the variable values had to be in quotes (single or double, it does not matter).

BAD:

export AWS_ACCESS_KEY_ID=AK123
export AWS_SECRET_ACCESS_KEY=abc123

GOOD:

export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'

GOOD:

export AWS_ACCESS_KEY_ID="AK123"
export AWS_SECRET_ACCESS_KEY="abc123"

Upvotes: 3

Alex Cohen

Reputation: 6236

I fixed the problem; I guess I didn't really have a solid understanding of what environment variables are. I fixed it by setting aws_access_key and aws_secret_key directly inside my ec2 task. Below is my working playbook:

- hosts: localhost
  connection: local
  gather_facts: False

  tasks:
    # this task creates 5 ec2 instances that are all named demo and are copies of the image specified
    - name: Provision a set of instances
      ec2:
        aws_access_key: .....
        aws_secret_key: ....
        key_name: .....
        group: .....
        instance_type: t2.micro
        image: ......
        region: us-east-1
        ec2_url: .......
        wait: true
        exact_count: 5
        count_tag:
          Name: Demo
        instance_tags:
          Name: Demo
      register: ec2

I guess now I need to start using Ansible Vault just to hold my key and ID.

Upvotes: 4

Arbab Nazar

Reputation: 23811

For those hitting this problem, you can solve it by setting become: False (sudo: False on older Ansible versions) and connection: local in the playbook. With become enabled, the task runs through sudo, which by default does not preserve the AWS_* environment variables exported in your user's shell, so boto never sees the credentials.

---
- hosts: localhost
  connection: local
  become: False
  tasks:
   ...
   ...

Hope this will help others.

Upvotes: 12
