SteveM

Reputation: 75

Inject SSH key into a Docker container

I am trying to find a "global" solution for injecting an SSH key into a container. I know there are several solutions, including Docker BuildKit, but I don't want to build an image with the SSH key baked in. I want to inject the SSH key into an existing image using Docker Compose.

I use the following docker compose file:

version: '3.1'

services:
  server1:
    image: XXXXXXX
    container_name: server1
    command: bash -c "/root/init.sh && python3 /root/my_python.py"
    environment:
      - MANAGED_HOST=mserver
    volumes:
      - ./init.sh:/root/init.sh
    secrets:
      - id_rsa

secrets:
  id_rsa:
    file: /home/user/.ssh/id_rsa

The init.sh is as follows:

#!/bin/bash

eval "$(ssh-agent -s)" > /dev/null
if [ ! -d "/root/.ssh/" ]; then
    mkdir /root/.ssh
    ssh-keyscan $MANAGED_HOST > /root/.ssh/known_hosts
fi
ssh-add -k /run/secrets/id_rsa

If I run docker compose with the command bash -c "/root/init.sh && python3 /root/my_python.py" shown above, then SSH authentication to the remote host ($MANAGED_HOST) does not work.

An agent process is running:

root         8     1  0 12:50 ?        00:00:00 ssh-agent -s

known_hosts is OK:

root@c67655d87ced:~# cat /root/.ssh/known_hosts
BLABLABLA ssh-rsa AAAAB3BLABLABLA....

but the private key has not been added:

root@c67655d87ced:~# ssh-add -l
Could not open a connection to your authentication agent.

Now, if I log into the container (docker exec -it server1 /bin/bash) and run the commands from init.sh one by one on the command line, SSH authentication to the remote host ($MANAGED_HOST) works!

Any idea how I can get this working with Docker Compose?

Upvotes: 1

Views: 1350

Answers (1)

David Maze

Reputation: 160013

It should be enough to cause the file $HOME/.ssh/id_rsa to exist with appropriate permissions; you don't need an ssh agent running.

#!/bin/sh
if ! [ -d "$HOME/.ssh" ]; then
  mkdir "$HOME/.ssh"
fi
chmod 0700 "$HOME/.ssh"
if [ -n "$MANAGED_HOST" ]; then
  ssh-keyscan "$MANAGED_HOST" >> "$HOME/.ssh/known_hosts"
fi
if [ -f /run/secrets/id_rsa ]; then
  cp /run/secrets/id_rsa "$HOME/.ssh/id_rsa"
  chmod 0400 "$HOME/.ssh/id_rsa"
fi
# exec "$@"

A typical pattern is to use the Dockerfile ENTRYPOINT to do first-time setup tasks like this. The entrypoint gets passed the CMD as arguments, and the exec "$@" line at the end of the script (commented out above; uncomment it for this pattern) then runs that command. You'd set this up in your image's Dockerfile like:

FROM XXXXXX
...
# Script must be executable on the host, and must start with a
# #!/bin/sh "shebang" line
COPY init.sh /root
# MUST use JSON-array form
ENTRYPOINT ["/root/init.sh"]
# CMD can use either shell form or JSON-array form
CMD ["python3", "/root/my_python.py"]
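
Since the question asks to reuse an existing image without rebuilding it, roughly the same effect can be had by overriding the entrypoint and command in the Compose file instead. This is only a sketch, assuming the same paths as in the question, and it requires uncommenting the exec "$@" line at the end of init.sh:

services:
  server1:
    image: XXXXXXX
    # Compose's entrypoint/command override the image's ENTRYPOINT/CMD,
    # so init.sh runs first and then execs the command it was given
    entrypoint: ["/root/init.sh"]
    command: ["python3", "/root/my_python.py"]
    environment:
      - MANAGED_HOST=mserver
    volumes:
      - ./init.sh:/root/init.sh
    secrets:
      - id_rsa

As with the Dockerfile variant, init.sh must be executable on the host and start with a shebang line for this to work.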

In your specific example, you're launching init.sh as a subprocess. The ssh-agent setup sets some environment variables, like $SSH_AUTH_SOCK, but because they are set in a subprocess, they don't propagate back out to the calling shell. You can use the standard POSIX shell . builtin (the bash source builtin is equivalent, but non-standard) to have those environment variables set in the context of the parent shell:

command: sh -c ". /root/init.sh && exec python3 /root/my_python.py"

The exec replaces the shell wrapper with the Python script, which you generally want. The Python process will also wind up being the parent of the ssh-agent process, which could surprise your program if the agent happens to exit.
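
To make the difference concrete, here are the two forms of the command line side by side; only this one line in the Compose file needs to change (a sketch, everything else stays as in the question):

# Original: init.sh runs in a child shell, so the variables that
# ssh-agent sets (SSH_AUTH_SOCK, SSH_AGENT_PID) die with that shell
# and the Python process can never reach the agent
command: bash -c "/root/init.sh && python3 /root/my_python.py"

# Sourced: init.sh runs in the same shell that then execs Python,
# so the agent variables are still in the environment Python inherits
command: sh -c ". /root/init.sh && exec python3 /root/my_python.py"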

Upvotes: 1
