Reputation: 34628
I want to have a server to transparently forward an incoming ssh connection from a client to a docker container. This should include scp, git transport and so forth. This must work with keys, passwords are deactivated. The user should not see the server. Update: Yes, this really means that the user shall be unaware that there is a server. The configuration must take place entirely on the server!
client -----> server -----> container (actual connection)
client -------------------> container (what the user should see)
So, what is given is this:
user@client$ ssh user@server
user@server$ ssh -p 42 user@localhost
user@container$
But what I want is this:
user@client$ ssh user@server
user@container$
I tried using the command="ssh -p 42 user@localhost" syntax in the authorized_keys files, which kind of works, except that for the second ssh connection the user has to enter their password, as the authentication is not forwarded (the server doesn't have the private key of user).
Furthermore, this approach doesn't work with scp, even if one enters a password.
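For reference, the full authorized_keys line on the server looked roughly like this (key string abbreviated):
command="ssh -p 42 user@localhost" ssh-rsa <KEYSTRING> user@client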
I also heard about the tunnel= option, but I don't know how to set that up (and the man page is less than helpful).
I am using OpenSSH 7.5p1 on Arch.
Upvotes: 3
Views: 2849
Reputation: 6187
This is going to be 2 parts:
You could bypass the SSH connection to the container altogether but still act like SSH is connecting to the container (use the username and the ~/.ssh/authorized_keys file from the container, as well as have a silent "redirect" into the container with no direct access to the server). I'm going to assume that the user exists both on the server and in the container and that the usernames match. We'll call the user ssh_username here.
On the server, you would docker exec a shell in the container (not SSHing into the container). Set the shell for ssh_username on the server to be something like /usr/local/bin/docker-shell:
#!/bin/sh
/usr/bin/docker exec -it -u ${USER} --env SSH_ORIGINAL_COMMAND="${SSH_ORIGINAL_COMMAND}" container_name sh "${@}"
To create this you would do:
cat <<"EOF" | sudo tee /usr/local/bin/docker-shell > /dev/null
#!/bin/sh
/usr/bin/docker exec -it -u ${USER} --env SSH_ORIGINAL_COMMAND="${SSH_ORIGINAL_COMMAND}" container_name sh "${@}"
EOF
sudo chmod a+rx /usr/local/bin/docker-shell
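To sanity-check the wrapper before wiring it into SSH, you can invoke it by hand the way sshd would invoke a login shell for a remote command (a rough test; env is used because sudo may not pass variable assignments through otherwise):
sudo env USER=ssh_username SSH_ORIGINAL_COMMAND= /usr/local/bin/docker-shell -c "echo hello from the container"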
Since we aren't using passwords and we don't want the users to have a shell on the server, we can create the users like this:
useradd -M -s /usr/local/bin/docker-shell -N -g nogroup ssh_username
mkdir /home/ssh_username
chown ssh_username /home/ssh_username
(using -M to prevent default files being created in the home directory, and -N to prevent creation of a matching group)
Now, you'll need to have ssh_username authenticate to the server, not the container, but we don't want to maintain the authorized_keys files in both places. We'll use SSH's AuthorizedKeysCommand to fetch them from the Docker container.
Here is how you would configure the AuthorizedKeysCommand on the server. Edit the server's /etc/ssh/sshd_config file to add:
Match User ssh_username
AuthorizedKeysCommandUser some_server_user_with_docker_access
AuthorizedKeysCommand /usr/bin/docker exec -i container_name cat /home/ssh_username/.ssh/authorized_keys
Here, some_server_user_with_docker_access would be replaced with a user that is in the docker group on the server, because this user needs to be able to run docker exec on the server. This could also be ssh_username, but then you would need to add that user to the docker group, and you may not want to do that.
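You can verify the key lookup works by running the same command manually as that user (a quick check, reusing container_name from above):
sudo -u some_server_user_with_docker_access /usr/bin/docker exec -i container_name cat /home/ssh_username/.ssh/authorized_keys
If this prints the container's authorized_keys, sshd will see the same output during authentication.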
Upvotes: 0
Reputation: 34628
This is the solution I came up with for now. I'm a bit unhappy with the second key, as its public part will be visible in the container's ~/.ssh/authorized_keys, which very slightly breaks transparency, but other than that everything seems to work.
user@server$ cat .ssh/authorized_keys
command="ssh -q -p 42 user@localhost -- \"$SSH_ORIGINAL_COMMAND\"",no-X11-forwarding ssh-rsa <KEYSTRING_1>
user@server$ cat .ssh/id_rsa.pub
<KEYSTRING_2>
user@container$ cat .ssh/authorized_keys
ssh-rsa <KEYSTRING_2>
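For completeness, the dedicated key pair behind <KEYSTRING_2> was created along these lines (default paths; empty passphrase so the jump is non-interactive):
user@server$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
user@server$ ssh-copy-id -p 42 user@localhost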
The client authorises against the server with their private key. Then the server jumps to the container with a dedicated key that is only there for that particular auth. I'm a bit worried that one could break out of command= by injecting commands, but so far I have found no permutation that allows breaking out.
Due to passing $SSH_ORIGINAL_COMMAND, you can even do scp and ssh-copy-id and so forth.
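For example (illustrative file name), a copy from the client lands directly in the container:
user@client$ scp ./some.file user@server:
user@client$ ssh user@server cat some.file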
Note: To disallow ssh-copy-id, which I want for other reasons, simply make authorized_keys non-writable for user inside the container.
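One way to do that (a sketch, run as root inside the container; note that ~/.ssh itself must not be user-writable either, or the file can simply be replaced):
root@container# chown root:root /home/user/.ssh /home/user/.ssh/authorized_keys
root@container# chmod 755 /home/user/.ssh
root@container# chmod 644 /home/user/.ssh/authorized_keys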
Upvotes: 1
Reputation: 36773
Put this in your ~/.ssh/config
file:
Host server-container
ProxyCommand ssh server -W localhost:42
Then simply do:
ssh server-container
This works as long as your usernames are consistent. If not, you can specify them like this:
Host server-container
ProxyCommand ssh server-user@server -W localhost:42
Then simply do:
ssh container-user@server-container
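Since the question specifically mentions scp and git: both go through the same Host alias unchanged (illustrative paths):
scp some.file server-container:
git clone server-container:path/to/repo.git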
Just as a bonus, you can avoid using ssh to enter the container altogether by using docker exec. Like this:
ssh -t server docker exec -it <container-id> bash
Upvotes: 2