Reputation: 11081
I have the following instance creation task in Ansible:
- name: Provisioning Spot instances
  ec2:
    assign_public_ip: no
    spot_price: "{{ ondemand4_price }}"
    spot_wait_timeout: 300
    aws_access_key: "{{ assumed_role.sts_creds.access_key }}"
    aws_secret_key: "{{ assumed_role.sts_creds.secret_key }}"
    security_token: "{{ assumed_role.sts_creds.session_token }}"
    region: "{{ aws_region }}"
    image: "{{ image_instance }}"
    instance_type: "{{ large_instance }}"
    key_name: "{{ ssh_keyname }}"
    count: "{{ ninstances }}"
    state: present
    group_id: "{{ cypher_priv_sg }}"
    vpc_subnet_id: "{{ private_subnet_id }}"
    instance_profile_name: 'Cypher-Ansible'
    wait: true
    instance_tags:
      Name: Cypher-Worker
    #delete_on_termination: yes
  register: ec2
  ignore_errors: True
And then the termination task is:
- name: Terminate instances that were previously launched
  connection: local
  become: false
  ec2:
    state: 'absent'
    instance_ids: '{{ ec2.instance_ids }}'
    region: '{{ aws_region }}'
  register: TerminateWorker
  ignore_errors: True
But instead of terminating my Worker instances, it throws an error that says:
TASK [Terminate instances that were previously launched] ***********************
task path: /path/to/file/Ansible/provision.yml:373
fatal: [x.y.a.202]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'ec2' is undefined\n\nThe error appears to have been in '/path/to/file/Ansible/provision.yml': line 373, column 7, but maybe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Terminate instances that were previously launched\n ^ here\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: 'ec2' is undefined"
}
What might be the issue here?
Upvotes: 1
Views: 971
Reputation: 369
Your task looks fine at first glance. But why do you use the "connection" and "become" flags on the termination task? Just asking, because you don't use them in the provisioning task.
EDIT2: Are your provisioning and termination tasks in the same play? If so, you can access the registered "ec2" variable like this:
- name: Terminate instances that were previously launched
  ec2:
    state: 'absent'
    instance_ids: '{{ item.instance_id }}'
    region: "{{ aws_region }}"
    wait: yes
    wait_timeout: 500
  with_items: "{{ ec2.instances }}"
If your termination task is in another play of the same playbook run, you have to use the set_fact module to make the variable accessible to the other plays; see the sketch below.
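A minimal sketch of that handoff, assuming the provisioning play runs on localhost (the fact name launched_instance_ids is a placeholder):
# In the provisioning play, persist the registered result as a fact:
- name: Remember the launched instance ids
  set_fact:
    launched_instance_ids: "{{ ec2.instance_ids }}"
# In a later play, read the fact back via hostvars:
- name: Terminate instances launched in the earlier play
  ec2:
    state: 'absent'
    instance_ids: "{{ hostvars['localhost']['launched_instance_ids'] }}"
    region: "{{ aws_region }}"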
If your termination task will be executed in an entirely different playbook run, you can find your instance ids with ec2_instance_facts like this:
- name: get ec2 instance id by its name tag
  ec2_instance_facts:
    filters:
      "tag:ec2_instance_name": "{{ ecs_instance_name }}"
      instance-state-name: running
  register: instances
With this method, you have to set the above-mentioned tag via the provisioning task.
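For illustration, a sketch of how the two ends fit together; the ec2_instance_name tag value reuses the names from above and is an assumption:
# Provisioning side: add the tag that the facts lookup filters on
instance_tags:
  Name: Cypher-Worker
  ec2_instance_name: "{{ ecs_instance_name }}"
# Termination side: feed the gathered ids into the ec2 module
- name: Terminate the instances found by ec2_instance_facts
  ec2:
    state: 'absent'
    instance_ids: "{{ instances.instances | map(attribute='instance_id') | list }}"
    region: "{{ aws_region }}"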
Upvotes: 3
Reputation: 235
You need to specify the ec2 variable to use in that task.
You can add:
with_items: "{{ ec2 }}"
At the end of your termination task, and it'll pick this up from the registered variable in the task above.
Upvotes: 0