Nvasion

Reputation: 620

Ansible failed to transfer file to /command

Recently I have been using Ansible for a wide variety of automation. However, while testing an automatic tomcat6 restart on specific webserver boxes, I came across a new error that I can't seem to fix.

FAILED => failed to transfer file to /command

The documentation says this happens when sftp-server is missing from sshd_config, however it is there.

Below is the command I am running against my webserver hosts.

ansible all -a "/usr/bin/sudo /etc/init.d/tomcat6 restart" -u user --ask-pass --sudo --ask-sudo-pass

There is a .ansible hidden folder on each of the boxes, so I know it's reaching them, but the command is not executing.

Running with -vvvv gives me this:

EXEC ['sshpass', '-d10', 'ssh', '-C', '-tt', '-vvv', '-o', 'ControlMaster=auto', '-o',    'ControlPersist=60s', '-o', 'ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'GSSAPIAuthentication=no', '-o', 'PubkeyAuthentication=no', '-o', 'User=user', '-o', 'ConnectTimeout=10', '10.10.10.103', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1400791384.19-262170576359689 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1400791384.19-262170576359689 && echo $HOME/.ansible/tmp/ansible-tmp-1400791384.19-262170576359689'"]

then

10.10.10.103 | FAILED => failed to transfer file to /home/user/.ansible/tmp/ansible-tmp-1400791384.19-262170576359689/command

Any help on this issue is much appreciated.

Thanks,


Edit:

To increase Googleability, here is another manifestation of the error that the chosen answer fixes.

Running the command ansible-playbook -i inventory hello_world.yml gives this warning for every host.

[WARNING]: sftp transfer mechanism failed on [host.example.com]. Use ANSIBLE_DEBUG=1 to see detailed information

And when you rerun the command as ANSIBLE_DEBUG=1 ansible-playbook -i inventory hello_world.yml the only extra information you get is:

>>>sftp> put /var/folders/nc/htqkfk6j6h70hlxrr43rm4h00000gn/T/tmpxEWCe5 /home/ubuntu/.ansible/tmp/ansible-tmp-1487430536.22-28138635532013/command.py

Upvotes: 33

Views: 55981

Answers (7)

d__ecay

Reputation: 51

Although it is true that this used to work back in 2014, when @stibi originally solved the problem:

[ssh_connection]
scp_if_ssh=True

It is now superseded by the following lines in ansible.cfg, which @Serge Stroobandt referred to above as an inline hosts option:

[ssh_connection]
transfer_method = scp

where the possible values for transfer_method are:

sftp  = use sftp to transfer files
scp   = use scp to transfer files
piped = use 'dd' over SSH to transfer files
smart = try sftp, scp, and piped, in that order [default]

When you specify scp instead of using the default, you avoid the transfer mechanism warning about sftp.
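The smart fallback above can be pictured as a short loop. This is only a sketch, not Ansible's actual implementation; try_transfer is a made-up stand-in for the real sftp/scp/piped attempts, rigged here so that sftp fails (as in the question) and scp succeeds:

```shell
# Hypothetical stand-in for the real transfer attempts: pretend
# sftp is broken and everything else works.
try_transfer() {
    [ "$1" = sftp ] && return 1
    return 0
}

# 'smart' tries each mechanism in order and stops at the first success.
for method in sftp scp piped; do
    if try_transfer "$method"; then
        echo "transferred via $method"   # -> transferred via scp
        break
    fi
done
```

Pinning transfer_method to scp simply skips the failing first attempt, which is why the warning disappears.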

Upvotes: 5

HarlemSquirrel

Reputation: 10194

On CentOS 8 I had to replace this line in /etc/ssh/sshd_config

Subsystem sftp /usr/libexec/openssh/sftp-server

with

Subsystem sftp internal-sftp

Then restart the sshd service with this command:

systemctl restart sshd
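The edit can also be scripted with sed. A sketch, demonstrated here on a scratch copy rather than the live file; once the pattern looks right, run the sed with sudo against the real /etc/ssh/sshd_config (and keep a backup):

```shell
# Demo of the substitution on a scratch file; the original line
# mirrors the CentOS 8 layout from the answer above.
cfg=$(mktemp)
printf 'Subsystem sftp /usr/libexec/openssh/sftp-server\n' > "$cfg"

# Replace any external sftp-server path with the in-process implementation.
sed -i 's|^Subsystem[[:space:]]\+sftp[[:space:]]\+.*|Subsystem sftp internal-sftp|' "$cfg"

cat "$cfg"   # -> Subsystem sftp internal-sftp
```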

Upvotes: 2

Serge Stroobandt

Reputation: 31658

Without touching /etc/ansible/ansible.cfg

If only one host is affected, then this can be remedied on a per host basis in the hosts file as follows:

alias ansible_host=192.168.1.102 ansible_ssh_transfer_method=scp

This solution requires ansible version 2.3 or higher.

[Source]
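For context, a minimal inventory sketch (host aliases and addresses are placeholders): only the host that needs the workaround carries the extra variable, the others are untouched.

```ini
[webservers]
web01 ansible_host=192.168.1.101
web02 ansible_host=192.168.1.102 ansible_ssh_transfer_method=scp
```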

Upvotes: 8

Nisk

Reputation: 1124

You can try this solution:

rm -rf ~/.ansible

And then

ansible-galaxy install cbrunnkvist.ansistrano-symfony-deploy --force

Then try again

ansible-playbook -i  etc/deploy/config/inventory.yml etc/deploy/deploy.yml

Upvotes: -3

slm

Reputation: 16456

I recently received a message like this for an entirely different reason. The culprit was stray text printed by a cd - command in my ~/.bashrc file. I fixed the issue by redirecting its output like this:

my ~/.bashrc

...
cd ~/ansible/hacking/ > /dev/null 2>&1 && . env-setup -q && cd - > /dev/null 2>&1
...

Without those redirects of the cd commands to /dev/null, I was getting this message:

TASK [setup] *******************************************************************
ok: [app02]
ok: [app03]
fatal: [app01]: FAILED! => {"failed": true, "msg": "failed to transfer file to /home/admin/.ansible/tmp/ansible-tmp-1474747432.93-129438354708729/setup:\n\n/home/admin\n"}

my ansible.cfg

The other interesting details from my situation are that I'm already using this in my ansible.cfg file:

[ssh_connection]
scp_if_ssh=True

And the server in the list with the issue, app01, is the same server where I'm running the Ansible playbook from.

The bit of text at the end of my error message:

74747432.93-129438354708729/setup:\n\n/home/admin\n"}

is what clued me into my issue. That's the output from cd ... when it runs during login when my ~/.bashrc file is being processed.
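This diagnosis can be reproduced locally. A sketch, with bash -c standing in for the non-interactive shell Ansible opens over SSH: cd - writes the previous directory to stdout, and anything that startup files print gets mixed into the data Ansible reads back, corrupting the transfer.

```shell
# 'cd -' prints the previous directory to stdout; with the
# redirects from the answer above, the same commands are silent.
noisy=$(bash -c 'cd /tmp && cd -')
quiet=$(bash -c 'cd /tmp > /dev/null 2>&1 && cd - > /dev/null 2>&1')

echo "noisy='$noisy'"   # the stray text that corrupted the transfer
echo "quiet='$quiet'"   # empty, as Ansible needs it to be
```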

Upvotes: 2

Vijender Marthi

Reputation: 1

This solution worked for me:

Step 1:

In the hosts file (/etc/ansible/hosts), write the address with the SSH user name in front, e.g. "user@192.168.1.102", instead of just "192.168.1.102".

Step 2:

Uncomment this property in the /etc/ansible/ansible.cfg file:

scp_if_ssh=True

Upvotes: 0

stibi

Reputation: 1039

Do you have the sftp subsystem enabled in sshd on the remote server? You can check in /etc/ssh/sshd_config (the exact config file path depends on your distribution). Look there for:

Subsystem      sftp    /usr/lib/ssh/sftp-server

If this line is commented out, sftp is disabled. To fix it, you can either enable sftp or change the Ansible configuration. I prefer the Ansible configuration change: take a look at ansible.cfg and add/change:

[ssh_connection]
scp_if_ssh=True
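A quick way to run that check is to grep for an uncommented Subsystem line. A sketch, shown on a scratch file here; point cfg at your real sshd_config to use it:

```shell
# An active line starts with optional whitespace, then 'Subsystem sftp';
# a leading '#' means the subsystem is disabled.
cfg=$(mktemp)
printf '#Subsystem sftp /usr/lib/ssh/sftp-server\n' > "$cfg"

if grep -Eq '^[[:space:]]*Subsystem[[:space:]]+sftp' "$cfg"; then
    echo "sftp subsystem enabled"
else
    echo "sftp subsystem disabled"   # this sample line is commented out
fi
```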

Upvotes: 45
