Reputation: 348
I am trying to use Ansible (2.2.1.0) to run a healthcheck playbook I wrote against a couple of hosts situated behind two bastion hosts.
I have two environments, dev and prod, each with its own SSH key (the keys are different). Each environment has a bastion host that you need to SSH into first in order to reach any other host in that environment. The problem is that Ansible uses the correct SSH keys for the bastion hosts, but appears to fall back to ~/.ssh/id_rsa for the hosts behind them.
My hosts inventory:

```ini
[jumpbox-dev]
DEV-BASTION ansible_ssh_host=XX.XX.XX.XX

[dev]
WEB1 ansible_ssh_host=10.0.0.1
WEB2 ansible_ssh_host=10.0.0.2

[jumpbox-prod]
PROD-BASTION ansible_ssh_host=YY.YY.YY.YY

[prod]
WEB3 ansible_ssh_host=10.0.0.1
WEB4 ansible_ssh_host=10.0.0.2
```
Under group_vars I have the files:

```
group_vars
- jumpbox-dev.yml
- dev.yml
- jumpbox-prod.yml
- prod.yml
```
My healthcheck.yml is:

```yaml
---
- name: Ping all hosts
  become: True
  hosts:
    - jumpbox-dev
    - dev
  gather_facts: yes
  tasks:
    - name: Ping
      ping:
```
jumpbox-dev.yml contains:

```yaml
ansible_ssh_private_key_file: /home/myUser/.ssh/id_rsa_dev
```

and dev.yml contains:

```yaml
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q [email protected]"'
```

Specifying ansible_ssh_private_key_file in dev.yml seems to be ignored, but all the requests succeed if I copy id_rsa_dev over id_rsa in my /home/myUser/.ssh folder. Adding -i /home/myUser/.ssh/id_rsa_dev to the ProxyCommand doesn't seem to help either.
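For reference, the combination I would have expected to work keeps both variables in dev.yml: ansible_ssh_private_key_file for the final hop to the web hosts, and -i inside the ProxyCommand for the hop to the bastion only (a sketch; the bastion user and address are placeholders, and the key paths are taken from above):

```yaml
# group_vars/dev.yml (sketch)
# Key used for the connection to WEB1/WEB2 themselves:
ansible_ssh_private_key_file: /home/myUser/.ssh/id_rsa_dev
# -i here only affects the hop to the bastion, not the final connection:
ansible_ssh_common_args: '-o ProxyCommand="ssh -i /home/myUser/.ssh/id_rsa_dev -W %h:%p -q <user>@<dev-bastion>"'
```

The -W flag makes the bastion forward stdin/stdout to the target host and port, so the target-host authentication still happens on the outer ssh connection, which is where ansible_ssh_private_key_file applies.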
Is there some config I am missing? Could it be related to my directory structure (going through GitHub issues suggests it might be)?
Cheers!
Upvotes: 1
Views: 759
Reputation: 68269
Never do environment isolation using groups in Ansible – use different inventories! See this answer. In your case variables from dev.yml are overwritten by vars from prod.yml because WEB1 and WEB2 are in both groups.
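One way to get that isolation (a sketch of the kind of layout meant here; directory and file names are illustrative) is one inventory directory per environment, each with its own group_vars, so dev and prod variables can never shadow each other:

```
inventories/
├── dev/
│   ├── hosts              # contains the [jumpbox-dev] and [dev] groups
│   └── group_vars/
│       ├── jumpbox-dev.yml
│       └── dev.yml
└── prod/
    ├── hosts              # contains the [jumpbox-prod] and [prod] groups
    └── group_vars/
        ├── jumpbox-prod.yml
        └── prod.yml
```

You then pick the environment explicitly at run time, e.g. `ansible-playbook -i inventories/dev/hosts healthcheck.yml`, and only that environment's group_vars are loaded.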
Upvotes: 1