Reputation: 1731
I need to copy a file between two remote nodes, machine1 and machine2.
Please note that my control node, from which I run all my Ansible tasks, is neither of the machines mentioned above.
I have tried the following:
Use the scp command in Ansible's shell module
- hosts: machine2
  user: user2
  tasks:
    - name: Copy file from machine1 to machine2
      shell: scp user1@machine1:/path-of-file/file1 /home/user2/file1
This approach just goes on and on and never ends.
Use the fetch and copy modules
- hosts: machine1
  user: user1
  tasks:
    - name: copy file from machine1 to local
      fetch:
        src: /path-of-file/file1
        dest: /path-of-file/file1

- hosts: machine2
  user: user2
  tasks:
    - name: copy file from local to machine2
      copy:
        src: /path-of-file/file1
        dest: /path-of-file/file1
This approach throws me an error as follows:
error while accessing the file /Users//.ansible/cp/ansible-ssh-machine2-22-, error was: [Errno 102] Operation not supported on socket: u'/Users//.ansible/cp/ansible-ssh-machine2-22-'
How can I achieve this?
Upvotes: 129
Views: 228114
Reputation: 783
You can store the file in a variable and then write the variable's contents into a file on the destination. This can be done with slurp and copy.
It has the advantage that you do not need an SSH connection between the two remotes, MachineA and MachineB; instead it uses each remote's existing connection to the Ansible controller. However, it performs badly for large files.
- name: Read File into variable on MachineA
  ansible.builtin.slurp:
    src: "/etc/foo/file1.txt"
  register: file1_slurp

- name: Write File from variable on MachineB
  delegate_to: "MachineB"
  ansible.builtin.copy:
    dest: "/etc/bar/{{ file1_slurp.source | basename }}"
    content: "{{ file1_slurp.content | b64decode }}"
    owner: root
    group: root
    mode: u=rw,g=r,o=r
The {{ file1_slurp.source | basename }} expression just means that we use the same filename on MachineB; you could also write file1.txt directly. Keeping the basename becomes useful when you copy multiple files with a with_items or loop statement, as sketched below.
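A minimal sketch of that multi-file variant, assuming two placeholder files under /etc/foo and the same destination directory as above:
- name: Read several files into variables on MachineA
  ansible.builtin.slurp:
    src: "{{ item }}"
  loop:
    - /etc/foo/file1.txt   # placeholder paths
    - /etc/foo/file2.txt
  register: slurped_files

- name: Write the files on MachineB, keeping their basenames
  delegate_to: "MachineB"
  ansible.builtin.copy:
    dest: "/etc/bar/{{ item.source | basename }}"
    content: "{{ item.content | b64decode }}"
  loop: "{{ slurped_files.results }}"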
Upvotes: 0
Reputation: 4780
To copy files remote-to-remote you can use the synchronize module with the delegate_to: source-server keyword:
- hosts: serverB
  tasks:
    - name: Copy Remote-To-Remote (from serverA to serverB)
      synchronize: src=/copy/from_serverA dest=/copy/to_serverB
      delegate_to: serverA
This playbook can run from your machineC.
Upvotes: 114
Reputation: 10375
As @ant31 already pointed out, you can use the synchronize module for this. By default the module transfers files between the control machine and the current remote host (inventory_hostname); however, that can be changed using the delegate_to parameter of a task (it's important to note that this is a parameter of the task, not of the module).
You can place the task on either ServerA or ServerB, but you have to adjust the direction of the transfer accordingly (using the mode parameter of synchronize).
Placing the task on ServerB
- hosts: ServerB
  tasks:
    - name: Transfer file from ServerA to ServerB
      synchronize:
        src: /path/on/server_a
        dest: /path/on/server_b
      delegate_to: ServerA
This uses the default mode: push, so the file gets transferred from the delegate (ServerA) to the current remote (ServerB).
This might look strange, since the task has been placed on ServerB (via hosts: ServerB). However, one has to keep in mind that the task is actually executed on the delegated host, which in this case is ServerA. So pushing (from ServerA to ServerB) is indeed the correct direction. Also remember that we cannot simply choose not to delegate at all, since that would mean that the transfer happens between the control node and ServerB.
Placing the task on ServerA
- hosts: ServerA
  tasks:
    - name: Transfer file from ServerA to ServerB
      synchronize:
        src: /path/on/server_a
        dest: /path/on/server_b
        mode: pull
      delegate_to: ServerB
This uses mode: pull to invert the transfer direction. Again, keep in mind that the task is actually executed on ServerB, so pulling is the right choice.
Upvotes: 140
Reputation: 2163
When transferring system secrets from machine1 to machine2 you may not have direct access between them, so solutions involving delegate_to will fail. This happens when you keep your private key on your Ansible control node and your public key in ~/.ssh/authorized_keys on the Ansible user accounts of machine1 and machine2. Instead you can pipe a file or directory from one machine to the other via ssh, using password-free sudo for remote privilege escalation:
-name "Copy /etc/secrets directory from machine1 and machine2"
delegate_to: localhost
shell: |
ssh machine1 sudo tar -C / -cpf - etc/secrets | ssh machine2 sudo tar -C / -xpf -
For example, to set up an ad hoc computing cluster using the MUNGE daemon for authentication, you can use the following playbook to copy the credentials from the head node to the workers.
setup_worker.yml
- name: "Install worker packages"
apt:
name: "{{ packages }}"
vars:
packages:
- munge
# ...and other worker packages...
become: true
- name: "Copy MUNGE credentials from head to {{ host }}"
delegate_to: localhost
shell:
ssh head.domain.name sudo tar -C / -cpf - etc/munge | ssh {{ ansible_facts["nodename"] }} sudo tar -C / -xpf -
become: false
when: ansible_facts["nodename"] != "head.domain.name"
- name: "Restart MUNGE"
shell: |
systemctl restart munge
become: true
This is run as ansible-playbook -e host=machine1 setup_worker.yml.
Since the Ansible user is unprivileged, the remote system tasks need become: true. The copy task does not need privilege escalation, since the controller is only used to set up the pipeline; the privilege escalation happens via the sudo in the ssh commands. The variable host contains the host pattern used on the command line, not the worker being initialized. Instead use ansible_facts["nodename"], which will be the fully-qualified domain name for the current worker (assuming the worker is properly configured). The when clause prevents us from trying to copy the directory from the head node onto itself.
Upvotes: 0
Reputation: 8687
In 2021 you should install the wrapper collection first:
ansible-galaxy collection install ansible.posix
And then use:
- name: Synchronize two directories on one remote host.
  ansible.posix.synchronize:
    src: /first/absolute/path
    dest: /second/absolute/path
  delegate_to: "{{ inventory_hostname }}"
Read more:
https://docs.ansible.com/ansible/latest/collections/ansible/posix/synchronize_module.html
Checked on:
ansible --version
ansible 2.10.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/daniel/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
executable location = /sbin/ansible
python version = 3.9.1 (default, Dec 13 2020, 11:55:53) [GCC 10.2.0]
Upvotes: 0
Reputation: 19958
You can use delegate_to with scp too:
- name: Copy file to another server
  become: true
  shell: "scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null admin@{{ inventory_hostname }}:/tmp/file.yml /tmp/file.yml"
  delegate_to: other.example.com
Because of delegate_to the command is run on the other server, and it scp's the file to itself.
Upvotes: 3
Reputation: 758
If you want to use rsync with a custom user and a custom SSH key, you need to specify that key in the rsync options.
---
- name: rsync
  hosts: serverA,serverB,serverC,serverD,serverE,serverF
  gather_facts: no
  vars:
    ansible_user: oracle
    ansible_ssh_private_key_file: ./mykey
    src_file: "/path/to/file.txt"
  tasks:
    - name: Copy Remote-To-Remote from serverA to server{B..F}
      synchronize:
        src: "{{ src_file }}"
        dest: "{{ src_file }}"
        rsync_opts:
          - "-e ssh -i /remote/path/to/mykey"
      delegate_to: serverA
Upvotes: 3
Reputation: 23
A simple way is to use the copy module to transfer the file from one server to another.
Here is the playbook:
---
- hosts: machine1   # from here the file will be transferred to the other remote machine
  tasks:
    - name: transfer data from machine1 to machine2
      copy:
        src: /path/of/machine1
        dest: /path/of/machine2
      delegate_to: machine2   # file/data receiver machine
Upvotes: 1
Reputation: 141
If you need to sync files between two remote nodes via ansible you can use this:
- name: synchronize between nodes
  environment:
    RSYNC_PASSWORD: "{{ input_user_password_if_needed }}"
  synchronize:
    src: rsync://user@remote_server:/module/
    dest: /destination/directory/
    # if needed:
    rsync_opts:
      - "--include=what_needed"
      - "--exclude=**/**"
    mode: pull
  delegate_to: "{{ inventory_hostname }}"
Note that on remote_server you need to start rsync in daemon mode. A simple example configuration:
pid file = /var/run/rsyncd.pid
lock file = /var/run/rsync.lock
log file = /var/log/rsync.log
port = port
[module]
path = /path/to/needed/directory/
uid = nobody
gid = nobody
read only = yes
list = yes
auth users = user
secrets file = /path/to/secret/file
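Assuming the configuration above is saved as /etc/rsyncd.conf (rsync's default configuration path), the daemon can then be started on remote_server with something like:
rsync --daemon --config=/etc/rsyncd.conf
The file named by secrets file holds username:password lines and must not be world-readable, otherwise the rsync daemon will refuse to use it.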
Upvotes: 5
Reputation: 1731
I was able to solve this by using local_action to scp the file from machineA to machineC and then copying the file to machineB.
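A minimal sketch of that approach (the user name user1 and the temporary path /tmp/file1 are placeholders, and it assumes the control node machineC can ssh to both machines):
- hosts: machineA
  tasks:
    - name: Pull the file from machineA to the control node (machineC)
      # runs on the control node; user1 and the paths are placeholders
      local_action: command scp user1@machineA:/path-of-file/file1 /tmp/file1

- hosts: machineB
  tasks:
    - name: Push the file from the control node to machineB
      copy:
        src: /tmp/file1
        dest: /path-of-file/file1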
Upvotes: 3