ph34r

Reputation: 243

Executing Code from One Container in Another (i.e. execute a script on a worker container from an API container)

I have a docker-compose setup consisting of four containers, each of which performs a single function:

An nginx proxy that forwards UI and API requests to the corresponding containers (a Node container for the UI, a Flask container for the API).

There is also a separate container which executes long-running Python scripts and works independently of the other containers. I'd now like to create the ability to execute scripts in the "long-running scripts" (LRS) container via the API.

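For reference, here is a minimal docker-compose sketch of this layout (service and directory names are illustrative, not my actual ones):

```yaml
# docker-compose.yml -- minimal sketch of the four-container layout
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
  ui:
    build: ./ui    # Node container serving the UI
  api:
    build: ./api   # Flask container serving the API
  lrs:
    build: ./lrs   # long-running scripts (LRS) container
```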

What is the best way to do this?

I've seen a few other questions that are somewhat similar to this one, but they raise more questions than they answer. Amongst the suggestions I've seen are:

  1. Pass docker.sock into the API container; from the API container, exec into the LRS container and execute the intended script
    • Doesn't this create serious security vulnerabilities?
    • Doesn't this require that Docker be installed in the API container in order to exec, violating Docker's separation-of-concerns principle?
  2. An HTTP listener on the LRS container, listening for commands from the API in order to execute the script on LRS
    • Again, doesn't this violate separation of concerns, since I'll now essentially need a lightweight API inside the LRS container to listen for actions from the principal API?

Neither of these solutions seems ideal. Am I missing something? How do I achieve the intended functionality?

Upvotes: 1

Views: 269

Answers (1)

Rob Conklin

Reputation: 9446

Generally the solution for running long-running scripts has been a pub-sub model. Your API drops a message onto an execution message queue. The worker instance subscribes to that queue and, when messages appear, executes your long-running script/query/etc. When the execution is complete, either a message goes back onto a different queue, or the results are placed in a predetermined location (URL). A minimal sketch of this pattern follows the list below.

This has a couple of advantages:

  1. The two services are effectively isolated from each other
  2. You can scale out the LRS (worker) side by adding additional workers if you need more capacity
  3. If the LRS instance goes down, the API does not depend on it being up; work is queued until an instance becomes available
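For illustration, here is a minimal sketch of this pattern using a Redis list as the queue. The `redis` hostname, the `lrs-jobs` queue name, and the script name are assumptions for the example, not part of the asker's setup; any broker (RabbitMQ, SQS, etc.) fills the same role.

```python
# api/app.py -- runs in the Flask (API) container
import json
import uuid

import redis
from flask import Flask, jsonify, request

app = Flask(__name__)
queue = redis.Redis(host="redis", port=6379)  # "redis" = broker service in compose (assumed name)

@app.route("/jobs", methods=["POST"])
def enqueue_job():
    # Drop a message describing the work onto the queue and return immediately;
    # the API never blocks on the long-running script itself.
    job = {
        "id": str(uuid.uuid4()),
        "script": "long_task.py",  # hypothetical script name
        "args": request.get_json(silent=True) or {},
    }
    queue.rpush("lrs-jobs", json.dumps(job))
    return jsonify({"job_id": job["id"]}), 202
```

```python
# worker/run.py -- runs in the LRS container
import json
import subprocess

import redis

queue = redis.Redis(host="redis", port=6379)

while True:
    # BLPOP blocks until a message arrives, so the worker idles cheaply.
    _key, raw = queue.blpop("lrs-jobs")
    job = json.loads(raw)
    # Run the long-running script; results could go to a shared volume,
    # a results queue, or a predetermined URL keyed by job["id"].
    subprocess.run(["python", job["script"], json.dumps(job["args"])], check=False)
```

With this in place, the API never touches docker.sock and the LRS container never exposes an HTTP endpoint; the only shared dependency is the broker, which you would add to the compose file as one extra `redis` service.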

Upvotes: 2
