Eli

Reputation: 4359

execute binary on linked container in docker

I have 3 Docker containers: one running nginx, another running PHP, and one running Serf by HashiCorp.

I want to use the PHP exec function to call the serf binary and fire off a serf event.

In my docker-compose file I have written:

version: '2'
services:
  web:
    restart: always
    image: `alias`/nginx-pagespeed:1.11.4
    ports:
      - 80
    volumes:
      - ./web:/var/www/html
      - ./conf/nginx/default.conf:/etc/nginx/conf.d/default.conf
    links:
      - php
    environment:
      - SERVICE_NAME=${DOMAIN}
      - SERVICE_TAGS=web
  php:
    restart: always
    image: `alias`/php-fpm:7.0.11
    links:
      - serf
    external_links:
      - mysql
    expose:
      - "9000"
    volumes:
      - ./web:/var/www/html
      - ./projects:/var/www/projects
      - ./conf/php:/usr/local/etc/php/conf.d
  serf:
    restart: always
    dns: 172.17.0.1
    image: `alias`/serf
    container_name: serf
    ports:
      - 7496:7496
      - 7496:7496/udp
    command: agent -node=${SERF_NODE} -advertise=${PRIVATE_IP}:7496 -bind=0.0.0.0:7496

I was imagining that I would do something like exec('serf serf event "test"') in PHP, where serf is the hostname of the container.

Or perhaps someone can suggest an alternative way to set something like this up?

Upvotes: 3

Views: 1235

Answers (1)

BMitch

Reputation: 263856

The "linked" containers allow network level discovery between containers. With docker networks, the linked feature is now considered legacy and isn't really recommended anymore. To run a command in another container, you'd need to either open up a network API functionality on the target container (e.g. a REST based http request to the target container), or you need to expose the host to the source container so it can run a docker exec against the target container.

The docker exec approach requires that you install the docker client in your source container and then expose the Docker engine to it, either by opening a port on the host or by mounting /var/run/docker.sock into the container. Since this gives the container root access on the host, it's not a recommended practice for anything other than administrative containers where you would otherwise trust the code running directly on the host.
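If you accept that trade-off, the PHP side is just a docker exec wrapped in exec(). This is a sketch, assuming the docker CLI is installed in the php-fpm image and /var/run/docker.sock:/var/run/docker.sock is added to the php service's volumes:

<?php
// Sketch only: requires the docker CLI in this container and the host's
// /var/run/docker.sock mounted into it. "serf" is the container_name above.
$event = escapeshellarg('test');
exec("docker exec serf serf event {$event} 2>&1", $output, $status);
if ($status !== 0) {
    error_log('serf event failed: ' . implode("\n", $output));
}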

The only other option I can think of is to remove the isolation between the containers with a shared volume.
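A rough sketch of that idea, assuming both services mount the same volume at /shared and you add your own watcher loop in the serf container that runs serf event for each file dropped into /shared/events (none of that exists out of the box):

<?php
// Sketch only: the /shared mount point and the watcher on the serf side
// are assumptions; this just drops a marker file for it to pick up.
$file = '/shared/events/' . uniqid('event_', true) . '.txt';
file_put_contents($file, 'test');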

An ideal solution is a message queuing service, which allows multiple workers to spin up and process requests at their own pace. The source container sends a request to the queue, and the target container listens for requests while it's running. This also lets the system keep working when workers are down; requests simply queue up.
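As a sketch of that pattern, using Redis purely as an example broker (not something the setup above includes) and the phpredis extension: the php container pushes event names onto a list, and a small worker running alongside the serf agent pops them off and fires serf event.

<?php
// Producer side (php container). Assumes a "redis" service is reachable
// by hostname and the phpredis extension is installed.
$redis = new Redis();
$redis->connect('redis', 6379);
$redis->rPush('serf-events', 'test');

<?php
// Worker side, run as a long-lived CLI process next to the serf agent.
$redis = new Redis();
$redis->connect('redis', 6379);
while (true) {
    // blPop blocks until an item arrives; timeout 0 means wait forever.
    $item = $redis->blPop(['serf-events'], 0); // returns [key, value]
    exec('serf event ' . escapeshellarg($item[1]));
}

If the worker is down, events simply accumulate in the list until it comes back up.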

Upvotes: 5
