Reputation: 111
I have created two Docker containers named server and client from the alpine image, and I am running both containers.
Then I installed:
apk add openssh
apk add openrc
in both containers.
I started the SSH service with:
rc-service sshd start
Now I want to copy a file using scp.
From the server container I typed:
scp myfile.txt [email protected]:/location_of_the_folder
It asks for a password for the client container. What can I do? Is there a default password for Docker containers?
I have tried the following options:
1. docker cp from the server container to the host, and then from the host to the client container.
2. ssh-keygen in the server container, then copy the id_rsa.pub key manually to the client container's /root/.ssh directory, and it works.
3. Automate option 2 with a shell script.
I don't want to use options 1 and 2. How can I do option 3 with a shell script? I can do it manually, but can it be automated with a shell script that copies the key from one container to the other?
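For reference, the manual ssh-keygen and key-copy procedure the question describes can be scripted from the Docker host. This is only a sketch: "server" and "client" are the container names from the question, while the RSA key type and the /root/.ssh paths are assumptions.

```shell
#!/bin/sh
# Sketch: automate the ssh-keygen/key-copy step from the Docker host.
# Container names come from the question; key type and paths are assumptions.
sync_ssh_key() {
  server=$1
  client=$2
  # Generate a key pair in the server container if one does not exist yet
  docker exec "$server" sh -c \
    'mkdir -p /root/.ssh && { [ -f /root/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa; }'
  # Append the public key to the client container's authorized_keys
  docker exec "$server" cat /root/.ssh/id_rsa.pub |
    docker exec -i "$client" sh -c \
      'mkdir -p /root/.ssh && cat >> /root/.ssh/authorized_keys && chmod 600 /root/.ssh/authorized_keys'
}

# Usage: sync_ssh_key server client
```

After this runs, scp from server to client should no longer prompt for a password.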
Upvotes: 11
Views: 20685
Reputation: 1244
As mentioned above, you would use docker cp. But if you still want to use scp (and perhaps rsync) for testing purposes, here are the steps that worked for me:
1. Copy ~/.ssh/authorized_keys to your working directory.
2. Create a Dockerfile in this directory; it will use the aforementioned authorized_keys:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y openssh-server rsync
RUN apt-get clean
# Create the missing directory for privilege separation
RUN mkdir -p /run/sshd
COPY ./authorized_keys /root/.ssh/authorized_keys
# Expose the SSH server port
EXPOSE 22
# Configure SSH to allow key-based authentication and disable password authentication
RUN echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
RUN echo "PasswordAuthentication no" >> /etc/ssh/sshd_config
RUN echo "PubkeyAuthentication yes" >> /etc/ssh/sshd_config
CMD ["/usr/sbin/sshd", "-D"]
Note that since we are creating the image for testing purposes, I've disabled password authentication and permitted root login.
3. Define the following docker-compose.yml file (note that I'm using docker-compose for convenience; you can build and run the image without it):
version: '3'
services:
  ssh-container:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: ssh-container
    ports:
      - "2222:22" # Map host port 2222 to container port 22
4. Run docker compose; e.g. you can use docker-compose up -d to run it in detached mode.
Now you can use scp command like so:
scp -o StrictHostKeyChecking=no -P 2222 local_file_path root@localhost:/path/on/docker/container
and rsync command like so:
rsync -Pav -e 'ssh -o StrictHostKeyChecking=no -p 2222' local_file_path root@localhost:/path/on/docker/container
(note that with rsync the port is passed to ssh via the -e option, not to rsync itself)
I'm using the -o StrictHostKeyChecking=no flag here because the container's SSH host keys change whenever the image is rebuilt, and SSH clients will then raise a REMOTE HOST IDENTIFICATION HAS CHANGED warning.
Upvotes: 1
Reputation: 3467
If it's a local docker container, use docker cp as explained here:
docker cp {container_name}:{file_path} {target_file_path OR target_dir_ended_with_slash}
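With the container names from the question, the two-hop route through the host looks like this (file paths are examples):

```shell
# Two-hop copy via the Docker host; container names come from the question,
# the file paths are examples.
two_hop_copy() {
  docker cp server:/root/myfile.txt /tmp/myfile.txt   # container -> host
  docker cp /tmp/myfile.txt client:/root/myfile.txt   # host -> container
}

# Usage: two_hop_copy
```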
But if you really need ssh (e.g. when the container runs on a remote host), try these steps:
1. Make sure you run your container with ssh port 22 redirection from host, e.g. docker run -p 8022:22 ...
Then inside the container:
2. Install sshd: sudo apt update && sudo apt install -y openssh-server
3. Create sshd directory: mkdir /var/run/sshd
4. Add password to current user ("root" user doesn't have password by default): passwd
5. Set PermitRootLogin yes
in sshd_config: sudo sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
6. You might also need: sudo sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
7. And: sudo sh -c 'echo "export VISIBLE=now" >> /etc/profile'
8. Restart sshd service: sudo service ssh restart
Then you should be able to connect with SSH, and transfer files with SCP.
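Steps 2-8 above can be collected into one script to run inside the container. This is a sketch that assumes a Debian/Ubuntu base image and a root shell; since passwd is interactive, chpasswd is used here instead, with a placeholder password.

```shell
#!/bin/sh
# Sketch of steps 2-8, to be run as root inside a Debian/Ubuntu-based
# container. The root password below is a placeholder; change it.
enable_root_ssh() {
  apt update && apt install -y openssh-server
  mkdir -p /var/run/sshd
  echo 'root:change_me' | chpasswd   # non-interactive alternative to passwd
  sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
  sed -i 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' /etc/pam.d/sshd
  echo "export VISIBLE=now" >> /etc/profile
  service ssh restart
}

# Usage (inside the container): enable_root_ssh
```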
If you get "port 22: Connection refused", try any of these workarounds:
docker exec --privileged -ti container_name bash
sudo apt-get install -y ufw && sudo ufw allow 22
Upvotes: 10
Reputation: 159428
Generally the Docker Linux distribution base images have all passwords disabled for all users. Also, the setup you're describing has a lot of credentials to manage (in each image: your local user's password, the remote user's password, your own ssh host keys, the remote ssh host keys), and doing this securely is tricky. (For instance, if you set a password in your Dockerfile, then anyone who has the image can read it out of docker history, or try to crack the hashed password from the image's /etc/shadow file.)
I would suggest avoiding ssh in Docker. It avoids the credential issues shown above, and also avoids the intrinsic difficulties in running more than one process in a container.
The easiest approach here is to add an endpoint to the "server" that can produce the file content, and have the "client" request it on startup or when it needs it.
You can also mount the same named Docker volume or host directory into both containers, and both will have read-write access to the same files, so you can transfer files this way. It's usually not a concern but there are potential race conditions (the same as if you were running both on the same system without Docker). This approach also doesn't scale well to multi-host installations like Docker Swarm or Kubernetes.
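The shared-volume approach can be sketched as follows; "shared-data", the container names, and the paths are examples, and alpine matches the question's image:

```shell
# Both containers mount the same named volume at /data, so a file written
# by one is immediately visible to the other. Names and paths are examples.
share_via_volume() {
  docker volume create shared-data
  docker run -d --name server -v shared-data:/data alpine sleep infinity
  docker run -d --name client -v shared-data:/data alpine sleep infinity
  docker exec server sh -c 'echo hello > /data/myfile.txt'
  docker exec client cat /data/myfile.txt
}

# Usage: share_via_volume
```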
Upvotes: 1