Reputation: 5644
I would like to create and provision Amazon EC2 machines with the help of Ansible, but I get the following error:
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Instance creation failed => InvalidKeyPair.NotFound: The key pair '~/.keys/EC2-Kibi-Enterprise-Deployment.pem' does not exist"}
But the .pem key exists:
$ ls -lh ~/.keys/EC2-Kibi-Enterprise-Deployment.pem
-r-------- 1 sergey sergey 1.7K Apr 6 09:56 /home/sergey/.keys/EC2-Kibi-Enterprise-Deployment.pem
And it was created in the EU (Ireland) region.
Here is my playbook:
---
- name: Setup servers on Amazon EC2 machines
  hosts: localhost
  gather_facts: no
  tasks:
    - include_vars: group_vars/all/ec2_vars.yml

    ### Create Amazon EC2 instances
    - name: Amazon EC2 | Create instances
      ec2:
        count: "{{ count }}"
        key_name: "{{ key }}"
        region: "{{ region }}"
        zone: "{{ zone }}"
        group: "{{ group }}"
        instance_type: "{{ machine }}"
        image: "{{ image }}"
        wait: true
        wait_timeout: 500
        #vpc_subnet_id: "{{ subnet }}"
        #assign_public_ip: yes
      register: ec2

    - name: Amazon EC2 | Wait for SSH to come up
      wait_for:
        host: "{{ item.public_ip }}"
        port: 22
        delay: 10
        timeout: 60
        state: started
      with_items: "{{ ec2.instances }}"

    - name: Amazon EC2 | Add hosts to the kibi_servers in-memory inventory group
      add_host: hostname={{ item.public_ip }} groupname=kibi_servers
      with_items: "{{ ec2.instances }}"
    ### END

### Provision roles
- name: Amazon EC2 | Provision new instances
  hosts: kibi_servers
  become: yes
  roles:
    - common
    - java
    - elasticsearch
    - logstash
    - nginx
    - kibi
    - supervisor
### END
And my var file:
count: 2
region: eu-west-1
zone: eu-west-1a
group: default
image: ami-d1ec01a6
machine: t2.medium
subnet: subnet-3a2aa952
key: ~/.keys/EC2-Kibi-Enterprise-Deployment.pem
What is wrong with the .pem file here?
Upvotes: 14
Views: 18091
Reputation: 1
When providing the key in a variable, don't include the file extension (.pem); just give the key pair name. For example, if akshay.pem is my key, then in the vars file I would provide just akshay as the key.
Upvotes: 0
Reputation: 170
Do not specify an extension for the key, so the key name should be just EC2-Kibi-Enterprise-Deployment. Ansible doesn't care whether the key exists on your local machine at this stage; it verifies that it exists in your AWS account. Go to the 'EC2 > Key Pairs' section of your AWS account and you'll see the keys are listed without file extensions.
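If you want to double-check the names AWS has on record, the AWS CLI can also list the registered key pairs for a region (an optional check, assuming the CLI is installed and configured for your account):
aws ec2 describe-key-pairs --region eu-west-1 --query 'KeyPairs[].KeyName'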
Upvotes: 4
Reputation: 5644
The solution has been found. EC2 doesn't like it when you pass a full path to the .pem key file.
So I moved EC2-Kibi-Enterprise-Deployment.pem into ~/.ssh and added it to the authentication agent with ssh-add:
ssh-add ~/.ssh/EC2-Kibi-Enterprise-Deployment.pem
Then I corrected the key line in my var file to key: EC2-Kibi-Enterprise-Deployment.pem
The same applies if you use the EC2 CLI tools: don't specify a full path to the key file.
ec2-run-instances ami-d1ec01a6 -t t2.medium --region eu-west-1 --key EC2-Kibi-Enterprise-Deployment.pem
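As a quick sanity check (not required for the fix), ssh-add -l lists the keys currently loaded in the agent, so you can confirm the key was actually added:
ssh-add -l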
Upvotes: 2
Reputation: 56997
The key_name parameter of the ec2 module expects the name of a key pair that has already been uploaded to AWS, not a local key file.
If you want Ansible to upload a public key, you can use the ec2_key module.
So your playbook would look like this:
---
- name: Setup servers on Amazon EC2 machines
  hosts: localhost
  gather_facts: no
  tasks:
    - include_vars: group_vars/all/ec2_vars.yml

    ### Create Amazon EC2 key pair
    - name: Amazon EC2 | Create Key Pair
      ec2_key:
        name: "{{ key_name }}"
        region: "{{ region }}"
        key_material: "{{ item }}"
      with_file: /path/to/public_key.id_rsa.pub

    ### Create Amazon EC2 instances
    - name: Amazon EC2 | Create instances
      ec2:
        count: "{{ count }}"
        key_name: "{{ key_name }}"
        ...
Upvotes: 19