Reputation: 10963
I am using Ansible and Vagrant, but I run my playbooks manually. Ansible always fails on the first run:
ansible-playbook -i cluster_hosts site.yml --tags setup_db --limit slave1
The report:
PLAY [database] ***************************************************************
GATHERING FACTS ***************************************************************
fatal: [slave1] => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
TASK: [postgresql | Copy source list] *****************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/home/robe/site.retry
slave1 : ok=0 changed=0 unreachable=1 failed=0
When I run it again, it passes. Why does Ansible fail on the first run?
UPDATE
Running again with the -vvvv option:
PLAY [database] ***************************************************************
GATHERING FACTS ***************************************************************
<192.168.1.13> ESTABLISH CONNECTION FOR USER: vagrant
<192.168.1.13> REMOTE_MODULE setup
<192.168.1.13> EXEC ['sshpass', '-d7', 'ssh', '-C', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/home/robe/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'StrictHostKeyChecking=no', '-o', 'GSSAPIAuthentication=no', '-o', 'PubkeyAuthentication=no', '-o', 'User=vagrant', '-o', 'ConnectTimeout=10', '192.168.1.13', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1411394566.34-255722526667010 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1411394566.34-255722526667010 && echo $HOME/.ansible/tmp/ansible-tmp-1411394566.34-255722526667010'"]
fatal: [slave1] => SSH encountered an unknown error. The output was:
OpenSSH_6.2p2 Ubuntu-6ubuntu0.5, OpenSSL 1.0.1e 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: auto-mux: Trying existing master
debug1: Control socket "/home/robe/.ansible/cp/ansible-ssh-192.168.1.13-22-vagrant" does not exist
debug2: ssh_connect: needpriv 0
debug1: Connecting to 192.168.1.13 [192.168.1.13] port 22.
debug2: fd 3 setting O_NONBLOCK
debug1: connect to address 192.168.1.13 port 22: Connection timed out
ssh: connect to host 192.168.1.13 port 22: Connection timed out
TASK: [postgresql | Copy source list] *****************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/home/robe/site.retry
slave1 : ok=0 changed=0 unreachable=1 failed=0
Upvotes: 1
Views: 1763
Reputation: 121
Do you use a private network with a custom inventory file? It could be that, from time to time, the VirtualBox networking is not fully operational yet on the additional adapter. (Vagrant uses the default adapter and thinks everything is online.)
A simple workaround is to increase the timeout:
ansible.raw_arguments = ['--timeout=300']
See here: https://github.com/mitchellh/vagrant/issues/4860
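For context, `raw_arguments` belongs in the Ansible provisioner block of your Vagrantfile. A minimal sketch, assuming the box name and file paths from the question (the box image itself is a placeholder):

```ruby
# Vagrantfile sketch: raise Ansible's SSH timeout so the first
# provisioning run survives a private-network adapter that comes
# up a little later than Vagrant's default NAT adapter.
Vagrant.configure("2") do |config|
  config.vm.define "slave1" do |slave|
    slave.vm.box = "ubuntu/trusty64"  # placeholder box image
    slave.vm.network "private_network", ip: "192.168.1.13"

    slave.vm.provision "ansible" do |ansible|
      ansible.playbook = "site.yml"
      ansible.inventory_path = "cluster_hosts"
      ansible.limit = "slave1"
      # Wait up to 300 s for SSH instead of Ansible's 10 s default.
      ansible.raw_arguments = ['--timeout=300']
    end
  end
end
```

The same flag works if you keep running the playbook by hand: `ansible-playbook --timeout=300 -i cluster_hosts site.yml --tags setup_db --limit slave1`.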
Upvotes: 2
Reputation: 12781
The machine that you're trying to provision with Ansible either isn't accepting/listening for SSH, or you have a networking problem. This is the key part of the output:
ssh: connect to host 192.168.1.13 port 22: Connection timed out
Is your target box running sshd?
Upvotes: 0