Nicolas Buch

Reputation: 511

How to execute command from one docker container to another

I'm creating an application that will allow users to upload video files that will then be put through some processing.

I have two containers.

  1. Nginx container that serves the website where users can upload their video files.
  2. Video processing container that has FFmpeg and some other processing stuff installed.

What I want to achieve: I need container 1 to be able to run a bash script on container 2.

One possibility, as far as I can see, is to make them communicate over HTTP via an API. But then I would need to install a web server in container 2 and write an API, which seems a bit overkill. I just want to execute a bash script.

Any suggestions?

Upvotes: 32

Views: 37941

Answers (7)

reinierpost

Reputation: 8591

Let's assume Linux and say container A needs to tell container B to execute the command foo.sh.

A safe approach would be to create a shared resource that A will update and B will watch.

You can use a file:

  • share a directory, say /run/foo, as a shared volume
  • in A, create a file whenever the command needs to run, e.g. touch /run/foo/please-execute
  • in B, watch for it using something like while sleep 60; do if [ -e /run/foo/please-execute ]; then foo.sh && rm /run/foo/please-execute; fi; done &

If B has the inotify utilities, you can use them to watch for the file, eliminating the polling delay.
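
For instance, with the inotify-tools package installed in B, the watcher could be sketched like this (same paths and script as above):

# react to the trigger file as soon as it is created, instead of polling
while inotifywait -qq -e create /run/foo; do
  if [ -e /run/foo/please-execute ]; then
    foo.sh && rm /run/foo/please-execute
  fi
done &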

Alternatively, you can use a named pipe:

  • create it (mkfifo) and use it as a volume in A and B
  • A writes a line to it: e.g., echo >> /run/foo/please-execute
  • B uses something like while read something; do foo.sh; done < /run/foo/please-execute &
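
Putting the pipe variant together, a minimal sketch (assuming /run/foo is a volume shared by both containers):

# one-time setup, e.g. in B's entrypoint
mkfifo /run/foo/please-execute

# in B: reopen the pipe each iteration so it keeps listening after a writer closes it
while true; do
  read -r line < /run/foo/please-execute && foo.sh
done &

# in A: trigger a run
echo run > /run/foo/please-execute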

Alternatively, add a container C with access to the Docker socket and have it monitor the file/pipe and execute the command in container B. That way, you don't need to modify container B.
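
A rough sketch of what C could run, assuming C has the Docker CLI, the host's /var/run/docker.sock mounted, and the same shared volume (the container name container_b is illustrative, and foo.sh is assumed to be on B's PATH):

# watch the shared trigger file and run the script in B via docker exec
while sleep 60; do
  if [ -e /run/foo/please-execute ]; then
    docker exec container_b foo.sh && rm /run/foo/please-execute
  fi
done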

Upvotes: 0

Martin

Reputation: 2942

You could write a very basic API using Ncat and GNU sed’s e command.

If needed, install nmap-ncat and GNU sed, then run something like this in the container you want to control:

ncat -lkp 9000 | sed \
  -e '/^cmd1$/e /opt/foo.sh' \
  -e '/^stop$/e kill -s INT 1'

The entrypoint script would look like this:

# listen for commands in the background
ncat -lkp 9000 | sed \
  -e '/^cmd1$/e /opt/foo.sh' \
  -e '/^stop$/e kill -s INT 1' &

# replace the shell so the daemon runs as PID 1
exec /opt/some/daemon

exec is required so the daemon runs as process ID 1, which is the process the stop command signals for a graceful shutdown.

And to send commands to this container, use something like

echo stop | nc containername 9000
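
To trigger the foo.sh command defined above instead of stopping the container:

echo cmd1 | nc containername 9000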

Note: you can use nc or ncat for sending commands, but on the receiving side the BusyBox nc does not keep listening for new requests without -e, which would need a different approach.

When also using a restart policy with Docker Compose, this could be used to restart containers (for example, to reload configuration or certificates) without having to give the controlling container access to the Docker socket (/var/run/docker.sock), which is insecure.

Upvotes: 1

Z4-tier

Reputation: 7978

You have a few options, but the first two that come to mind are:

  1. In container 1, install the Docker CLI and bind mount /var/run/docker.sock (you need to specify the bind mount from the host when you start the container). Then, inside the container, you should be able to use docker commands against the bind-mounted socket as if you were executing them from the host (you might also need to chmod the socket inside the container to allow a non-root user to do this); see the sketch just after this list.
  2. You could install SSHD on container 2, and then ssh in from container 1 and run your script. The advantage here is that you don't need to make any changes inside the containers to account for the fact that they are running in Docker and not bare metal. The downside is that you will need to add the SSHD setup to your Dockerfile or the startup scripts.
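
For option 1, the moving parts look roughly like this (the image, container, and script names are placeholders):

# on the host: start container 1 with the Docker socket bind mounted
docker run -d --name web \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-nginx-image

# inside container 1 (with the Docker CLI installed): run the processing script in container 2
docker exec video-processor /opt/process.sh /uploads/input.mp4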

Most of the other ideas I can think of are just variants of option (2), with SSHD replaced by some other tool.

Also be aware that Docker networking is a little strange (at least on Mac hosts), so you need to make sure that the containers are on the same Docker network and are able to communicate over it.

Warning:

To be completely clear, do not use option 1 outside of a lab or very controlled dev environment. It takes a socket that has full authority over the Docker runtime on the host and grants unchecked access to it from a container. Doing that makes it trivially easy to break out of the Docker sandbox and compromise the host system. About the only place I would consider it acceptable is as part of a full-stack integration test setup that will only be run ad hoc by a developer. It's a hack that can be a useful shortcut in some very specific situations, but the drawbacks cannot be overstated.

Upvotes: 21

eshaan7

Reputation: 1058

I wrote a Python package especially for this use case.

Flask-Shell2HTTP is a Flask extension that converts a command-line tool into a RESTful API in a mere 5 lines of code.

Example Code:

from flask import Flask
from flask_executor import Executor
from flask_shell2http import Shell2HTTP

app = Flask(__name__)
executor = Executor(app)
shell2http = Shell2HTTP(app=app, executor=executor, base_url_prefix="/commands/")

# POST /commands/saythis -> runs `echo` with the args from the request body
shell2http.register_command(endpoint="saythis", command_name="echo")
# POST /commands/run -> runs ./myscript with the args from the request body
shell2http.register_command(endpoint="run", command_name="./myscript")

if __name__ == "__main__":
    app.run(port=4000)  # port chosen to match the curl example below

This can then be called easily, for example:

$ curl -X POST -H 'Content-Type: application/json' -d '{"args": ["Hello", "World!"]}' http://localhost:4000/commands/saythis
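
The second endpoint registered above works the same way; the argument here is just an example:

$ curl -X POST -H 'Content-Type: application/json' -d '{"args": ["/uploads/input.mp4"]}' http://localhost:4000/commands/run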

You can use this to create RESTful micro-services that execute pre-defined shell commands/scripts with dynamic arguments asynchronously and fetch the result.

It supports file uploads, callback functions, reactive programming, and more. I recommend checking out the Examples.

Upvotes: 16

RexBarker

Reputation: 1771

It was mentioned here before, but a reasonable, semi-hacky option is to install SSH in both containers and then use ssh to execute commands on the other container:

# install SSH, if you don't have it already
sudo apt install openssh-server

# start the ssh service
sudo service ssh start

# or start the daemon directly
sudo /usr/sbin/sshd -D &

Assuming you don't want to always be root, you can add a default user (in this case, 'foobob'):

useradd -m --no-log-init --system --uid 1000 foobob -s /bin/bash -g sudo -G root

# change the password
echo 'foobob:foobob' | chpasswd

Do this on both the source and target containers. Now you can execute a command from container_1 to container_2.

# obtain container-id of target container using 'docker ps'
ssh foobob@<container-id> << "EOL"
echo 'hello bob from container 1' > message.txt
EOL

You can automate the password with ssh-agent, or do something a bit more hacky with sshpass (install it first using sudo apt install sshpass):

sshpass -p 'foobob' ssh foobob@<container-id>
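
If you would rather avoid passwords altogether, here is a small key-based sketch (run on container 1; the user and paths are the ones from above):

# generate a key pair without a passphrase
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519

# copy the public key to the target container (asks for foobob's password once)
ssh-copy-id foobob@<container-id>

# subsequent commands run without a password prompt
ssh foobob@<container-id> 'echo "hello bob from container 1" > message.txt'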

Upvotes: 5

Marc ABOUCHACRA

Reputation: 3463

Running a Docker command from a container is not straightforward and not really a good idea (in my opinion), because:

  1. You'll need to install Docker in the container (and do Docker-in-Docker stuff)
  2. You'll need to share the Unix socket, which is not a good thing if you don't know exactly what you're doing.

So, this leaves us with two solutions:

  1. Install SSH on your container and execute the command through SSH
  2. Share a volume and have a process that watches for something to trigger your batch job

Upvotes: 13

Mornor

Reputation: 3783

I believe

docker exec -it <container_name> <command>

should work, even inside the container.

You could also try to mount docker.sock into the container you are trying to execute the command from:

docker run -v /var/run/docker.sock:/var/run/docker.sock ...

Upvotes: -4
