Reputation: 40588
Is there a way to ignore the SSH authenticity checking made by Ansible? For example, when I've just set up a new server I have to answer yes to this question:
GATHERING FACTS ***************************************************************
The authenticity of host 'xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx)' can't be established.
RSA key fingerprint is xx:yy:zz:....
Are you sure you want to continue connecting (yes/no)?
I know that this is generally a bad idea but I'm incorporating this in a script that first creates a new virtual server at my cloud provider and then automatically calls my ansible playbook to configure it. I want to avoid any human intervention in the middle of the script execution.
Upvotes: 257
Views: 300720
Reputation: 1491
Host key checking is an important security measure, so I would not just skip it everywhere. Yes, it can be annoying if you keep reinstalling the same testing host (without backing up its SSH host keys), or if you have stable hosts but run your playbook from Jenkins without a simple way to accept the host key when connecting to a host for the first time. So:
This is what we use for stable hosts (when running the playbook from Jenkins and you simply want to accept the host key on first connection), in the inventory file:
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=accept-new'
And this is what we have for temporary hosts whose hostname is the same every time (in the end this ignores the host key entirely, so it's really not secure):
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
There is also an environment variable for this, or you can add it to a group/host variables file. There's no need to have it in the inventory - that was just convenient in our case.
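For example, a minimal sketch of the group variables option (the group_vars/all.yml path is just an assumed layout):
# group_vars/all.yml - accept unknown host keys on first connection only
ansible_ssh_common_args: '-o StrictHostKeyChecking=accept-new'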
I used some other responses here and a co-worker's solution, thank you!
Upvotes: 7
Reputation: 5085
If it's your first time connecting to these hosts, and you just want to get the host keys for the machines, you can do it using the following command:
ansible -m ansible.builtin.shell \
-a "ssh-keyscan -H {{ ansible_host }} >> ~/.ssh/known_hosts" \
-c "local" \
-e ansible_python_interpreter=/usr/bin/python3 \
--forks=1 \
"YOUR-INVENTORY-PATTERN-GOES-HERE"
This is essentially the same as nikobelia's answer; it just skips the hassle of creating a separate playbook.
It uses ansible_host instead of inventory_hostname in case you do not have your local DNS set up to resolve your remote machines' hostnames.
The -e ansible_python_interpreter=... was required on my machine. Something about /usr/bin/env python3 didn't work for my Ansible install - not sure what. Your mileage may vary.
--forks=1 is there to ensure file consistency. Without it, multiple processes will write to your ~/.ssh/known_hosts at once. There may be dragons if you don't include it.
Upvotes: 0
Reputation: 783
Add this to your Ansible command to automatically accept new keys, instead of turning off key checking:
--ssh-common-args='-o StrictHostKeyChecking=accept-new'
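For example, a full invocation could look like this (the playbook name is just a placeholder):
ansible-playbook site.yml --ssh-common-args='-o StrictHostKeyChecking=accept-new'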
Upvotes: 1
Reputation: 1571
If you are looking for an idiomatic solution and not just a bypass workaround, then you should pay attention to SSH host and user certificates. In an ideal world, when a host boots up, it contacts your SSH CA and gets its SSH host certificate signed automatically. Then you, as a user, sign your own SSH user certificate against the same SSH CA before logging in, and you can log in to any allowed machine. With SSH certificates you can implement certificate rotation, detailed audits, RBAC and what not.
One of the off-the-shelf solutions for an SSH CA is HashiCorp Vault, but you can do it manually as well - just set the rotation periods a bit longer. It can also be air-gapped.
I don't know why this option is not discussed here.
Ideally, StrictHostKeyChecking should stay enabled (yes) at all times, and you should use CertificateFile to tell SSH which certificate to use for each host (or for all hosts).
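A rough client-side sketch of that setup (the hostnames, key file names and CA key are placeholders; it assumes default OpenSSH file locations):
# ~/.ssh/known_hosts - trust host certificates signed by your SSH CA
@cert-authority *.example.com ssh-ed25519 AAAA...your-ca-public-key...
# ~/.ssh/config - keep strict checking on and present your signed user certificate
Host *.example.com
    StrictHostKeyChecking yes
    IdentityFile ~/.ssh/id_ed25519
    CertificateFile ~/.ssh/id_ed25519-cert.pub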
Upvotes: 0
Reputation: 1
Generate SSH keys on the control node and copy them over to the clients for passwordless SSH connections.
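A rough sketch of what that looks like (the user and host are placeholders; note this only handles authentication - the host key fingerprint prompt from the question still needs one of the other answers):
# on the Ansible control node
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@client.example.com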
Upvotes: 0
Reputation: 3214
In case you are trying to solve this for Git:
Ansible has a dedicated git module.
It has the parameter accept_newhostkey.
Working example:
- name: Example clone of a single branch
  ansible.builtin.git:
    repo: [email protected]:hohoho/auparser.git
    dest: /var/www/auparser
    single_branch: yes
    version: master
    accept_newhostkey: true
Upvotes: 0
Reputation: 20296
You can simply tell SSH to automatically accept fingerprints for new hosts. Just add StrictHostKeyChecking=accept-new to your ~/.ssh/config. This does not disable host key checking entirely; it merely suppresses the question about whether you want to add a new fingerprint to your list of known hosts. If the fingerprint of a known machine changes, you will still get the error.
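For example, a minimal ~/.ssh/config entry (the Host * pattern is just an illustration - you may want to narrow it to the hosts you actually manage):
Host *
    StrictHostKeyChecking accept-new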
This policy also works with ANSIBLE_HOST_KEY_CHECKING and the other ways of passing this parameter to SSH.
Upvotes: 14
Reputation: 452
Changing host_key_checking to false for all hosts is a very bad idea.
The only time you want to ignore it is on "first contact", which this playbook accomplishes:
---
- name: Bootstrap playbook
  hosts: all   # host pattern assumed - adjust to match your inventory
  # Don't gather facts automatically because that will trigger
  # a connection, which needs to check the remote host key
  gather_facts: false
  tasks:
    - name: Check known_hosts for {{ inventory_hostname }}
      local_action: shell ssh-keygen -F {{ inventory_hostname }}
      register: has_entry_in_known_hosts_file
      changed_when: false
      ignore_errors: true

    - name: Ignore host key for {{ inventory_hostname }} on first run
      when: has_entry_in_known_hosts_file.rc == 1
      set_fact:
        ansible_ssh_common_args: "-o StrictHostKeyChecking=no"

    # Now that we have resolved the issue with the host key
    # we can "gather facts" without issue
    - name: Delayed gathering of facts
      setup:
So we only turn off host key checking if we don't have the host key in our known_hosts file.
Upvotes: 24
Reputation: 580
This is the one that works in my environment. I used the idea from this ticket: https://github.com/mitogen-hq/mitogen/issues/753
- name: Example play
  gather_facts: no
  hosts: all
  tasks:
    - name: Check SSH known_hosts for {{ inventory_hostname }}
      local_action: shell ssh-keygen -l -F {{ inventory_hostname }}
      register: checkForKnownHostsEntry
      failed_when: false
      changed_when: false
      ignore_errors: yes

    - name: Add {{ inventory_hostname }} to SSH known hosts automatically
      when: checkForKnownHostsEntry.rc == 1
      changed_when: checkForKnownHostsEntry.rc == 1
      local_action:
        module: shell
        args: ssh-keyscan -H "{{ inventory_hostname }}" >> $HOME/.ssh/known_hosts
Upvotes: 1
Reputation: 4887
Two options - the first, as you said in your own answer, is setting the environment variable ANSIBLE_HOST_KEY_CHECKING to False.
The second way to set it is to put it in an ansible.cfg file, and that's a really useful option because you can either set it globally (at system or user level, in /etc/ansible/ansible.cfg or ~/.ansible.cfg), or in a config file in the same directory as the playbook you are running.
To do that, make an ansible.cfg file in one of those locations, and include this:
[defaults]
host_key_checking = False
You can also set a lot of other handy defaults there, like whether or not to gather facts at the start of a play, whether to merge hashes declared in multiple places or replace one with another, and so on. There's a whole big list of options here in the Ansible docs.
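For instance, a sketch of an ansible.cfg combining a few of those defaults (double-check the option names against your Ansible version):
[defaults]
host_key_checking = False
# only gather facts when a play explicitly runs the setup module
gathering = explicit
# merge hashes defined in multiple places instead of replacing them
hash_behaviour = merge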
Edit: a note on security.
SSH host key validation is a meaningful security layer for persistent hosts - if you are connecting to the same machine many times, it's valuable to accept the host key locally.
For longer-lived EC2 instances, it would make sense to accept the host key with a task run only once on initial creation of the instance:
- name: Write the new ec2 instance host key to known hosts
  connection: local
  shell: "ssh-keyscan -H {{ inventory_hostname }} >> ~/.ssh/known_hosts"
There's no security value for checking host keys on instances that you stand up dynamically and remove right after playbook execution, but there is security value in checking host keys for persistent machines. So you should manage host key checking differently per logical environment.
For example, put an ./ansible.cfg (with host_key_checking disabled) alongside the playbook for unit tests against Vagrant VMs or automation for short-lived EC2 instances, rather than turning it off for everything in ~/.ansible.cfg.
Upvotes: 363
Reputation: 15329
Ignoring the check is a bad idea, as it makes you susceptible to man-in-the-middle attacks.
I took the liberty of improving nikobelia's answer by only adding each machine's key once and actually setting the ok/changed status in Ansible:
- name: Accept EC2 SSH host keys
  connection: local
  become: false
  shell: |
    ssh-keygen -F {{ inventory_hostname }} ||
      ssh-keyscan -H {{ inventory_hostname }} >> ~/.ssh/known_hosts
  register: known_hosts_script
  changed_when: "'found' not in known_hosts_script.stdout"
However, Ansible gathers facts before the script runs, which itself requires an SSH connection, so we have to either disable automatic fact gathering or manually move it to a later task:
- name: Example play
  hosts: all
  gather_facts: no  # gather facts AFTER the host key has been accepted instead
  tasks:
    # https://stackoverflow.com/questions/32297456/
    - name: Accept EC2 SSH host keys
      connection: local
      become: false
      shell: |
        ssh-keygen -F {{ inventory_hostname }} ||
          ssh-keyscan -H {{ inventory_hostname }} >> ~/.ssh/known_hosts
      register: known_hosts_script
      changed_when: "'found' not in known_hosts_script.stdout"

    - name: Gathering Facts
      setup:
One kink I haven't been able to work out is that it marks all hosts as changed even if it only adds a single key. If anyone could contribute a fix, that would be great!
Upvotes: 7
Reputation: 878
Most problems appear when you want to add a new host to a dynamic inventory (via the add_host module) in a playbook. I don't want to disable fingerprint host key checking permanently, so solutions like disabling it in a global config file are not OK for me. Exporting a variable like ANSIBLE_HOST_KEY_CHECKING before every run is yet another thing that needs to be remembered.
It's better to add a local config file in the same directory as the playbook. Create a file named ansible.cfg and paste the following text:
[defaults]
host_key_checking = False
No need to remember to set environment variables or add options to ansible-playbook. It's also easy to put this file in your Ansible Git repo.
Upvotes: 0
Reputation: 7424
If you don't want to modify ansible.cfg or the playbook.yml, then you can just set an environment variable:
export ANSIBLE_HOST_KEY_CHECKING=False
Upvotes: 9
Reputation: 344
I know the question has already been answered correctly, but I just wanted to link the Ansible doc that explains clearly when and why this check should be enabled: host-key-checking
Upvotes: 1
Reputation: 167
Use the parameter validate_certs to ignore the SSH validation:
- ec2_ami:
    instance_id: i-0661fa8b45a7531a7
    wait: yes
    name: ansible
    validate_certs: false
    tags:
      Name: ansible
      Service: TestService
By doing this, it ignores the SSH validation process.
Upvotes: -1
Reputation: 129
You can pass it as a command line argument while running the playbook:
ansible-playbook play.yml --ssh-common-args='-o StrictHostKeyChecking=no'
Upvotes: 12
Reputation: 4769
Following up on nikobelia's answer:
For those using Jenkins to run the playbook, I just added the environment variable ANSIBLE_HOST_KEY_CHECKING=False to my Jenkins job before running ansible-playbook. For instance:
export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook 'playbook.yml' \
--extra-vars="some vars..." \
--tags="tags_name..." -vv
Upvotes: 8
Reputation: 40588
I found the answer: you need to set the environment variable ANSIBLE_HOST_KEY_CHECKING to False. For example:
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook ...
Upvotes: 60