user3162553

Reputation: 2869

Populating docker-machine with docker-compose

Edit: How can I start containers on a docker-machine with docker-compose?

I have provisioned 3 docker machines and joined two as workers to one master machine. However, none of my code or services seem to be present on the machines. How can I run a docker-compose file on a docker machine?

Normally in development I run docker-compose up. However, docker-compose is unavailable on the box. I'm not sure at what point the machine starts running the containers, or how that is supposed to happen. I passed docker-compose.prod.yml when creating the stack:

docker stack deploy -c docker-compose.prod.yml app

I'm very new to all of this, coming from Heroku. Just wondering if the only way to do this is to ssh into every box and manually copy files over.

I would expect to be able to docker-machine ssh into the host and then, inside that shell, run docker-compose up.

If it's true that I have to manually rsync all of the dirs over, I'm curious what the stack actually does?

Edit

version: "3"

services:
  api:
    image: "api"
    command: rails server -b "0.0.0.0" -e production
    depends_on:
      - db
      - redis
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    env_file:
      - .env-prod
    networks:
      - apinet
    ports:
      - "3000:3000"
  client:
    image: "client"
    depends_on:
      - api
    deploy:
      restart_policy:
        condition: on-failure
    env_file:
      - .env-prod
    networks:
      - apinet
      - clientnet
    ports:
      - "4200:4200"
      - "35730:35730"
  db:
    deploy:
      placement:
        constraints: [node.role == manager]
      restart_policy:
        condition: on-failure
    env_file: .env-prod
    image: mysql
    ports:
      - "3306:3306"
    volumes:
      - ~/.docker-volumes/app/mysql/data:/var/lib/mysql/data
  redis:
    deploy:
      placement:
        constraints: [node.role == manager]
      restart_policy:
        condition: on-failure
    image: redis:alpine
    ports:
      - "6379:6379"
    volumes:
      - ~/.docker-volumes/app/redis/data:/var/lib/redis/data
  nginx:
    image: app_nginx
    deploy:
      restart_policy:
        condition: on-failure
    env_file: .env-prod
    depends_on:
      - client
      - api
    networks:
      - apinet
      - clientnet
    ports:
      - "80:80"
networks:
  apinet:
    driver: overlay
  clientnet:
    driver: overlay

Upvotes: 4

Views: 1153

Answers (1)

Rick van Lieshout

Reputation: 2316

You wouldn't use one Docker container to run other containers; that is not what Compose is for.

You have to set up a docker-compose file in which all the containers are "linked" so that they can talk to each other. Whatever you do after that depends on the software you'd like to run (e.g. if one host has to use other nodes, you'd give the master container a config file telling it where and how to find the nodes).

An example:

Take a look at the docker-compose file I made for my ELK stack. In that file I declare two containers (Elasticsearch and Kibana) and link them together with the (now deprecated) links tag. (This now happens semi-automagically; see the documentation for more details.)

Then in the kibana.yml file (at line 10) I point the Kibana container to the Elasticsearch container (using its docker-compose name/link name).

This enables the Kibana instance (imagine this would be a host) to talk to its respective Elasticsearch instance (imagine this would be a slave machine).
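As a minimal sketch of the same idea (service names and image tags here are just examples, not taken from that repository), services on a shared Compose network can reach each other by service name, which Docker's built-in DNS resolves:

version: "3"
services:
  elasticsearch:
    image: elasticsearch:6.5.4    # example tag
  kibana:
    image: kibana:6.5.4           # example tag
    depends_on:
      - elasticsearch
    environment:
      # Kibana finds Elasticsearch via the service name "elasticsearch"
      ELASTICSEARCH_URL: http://elasticsearch:9200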

Next up your last statement:

I would expect to be able to docker-machine ssh into the host and then, inside that shell, run docker-compose up.

This is true to some extent; you can get an interactive (bash) shell inside a container with the following command:

docker exec -ti {ID/NAME of container} /bin/bash
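For example, you can look up the name with docker ps first (the container name below is just an illustration):

docker ps                             # list running containers and their names
docker exec -ti app_api_1 /bin/bash   # "app_api_1" is an example container name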

This won't allow you to run docker-compose though, as that would most certainly lead to a variation of the infamous Droste effect:

Droste effect (picture source: https://www.flickr.com/photos/subblue/sets/72157609140149244/)

Your new question

If it's true that I have to manually rsync all of the dirs over, I'm curious what the stack actually does?

No, you do not have to do this manually; you can use persistent storage so your containers can access the data from those folders. You'd basically assign a local folder to hold the data and then "bind" that folder into the container(s), allowing them to access the data freely. Read the documentation for more info; a small sketch is shown below.
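A minimal sketch of such a bind mount, assuming the mysql image and an example host path:

# Bind-mount a host folder into the container so the MySQL data
# survives container restarts (/srv/app/mysql is just an example path).
docker run -d --name db -v /srv/app/mysql:/var/lib/mysql mysql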

As for the stack: docker stack serves the same general purpose as a docker-compose script, but it deploys to a swarm of nodes rather than to a single Docker instance, which is why it offers some extra deployment options (the deploy keys in your file). Again, read the documentation for more info.
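For instance (machine and stack names are just examples), from a shell pointed at the swarm manager you could deploy and inspect a stack like this:

# Point the local docker client at the manager machine
eval $(docker-machine env manager1)

# Deploy the stack described in the compose file, then check its services
docker stack deploy -c docker-compose.prod.yml app
docker stack services app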

Your docker-in-docker edit

Because of your latest edit I feel obligated to go into more detail about the docker-in-docker thing.

Running Docker containers inside Docker is generally not recommended. It is possible though; it has even been featured on the Docker blog.

Let me reiterate though: your use case calls for sibling containers, not nested containers. Please read my post again, but keep an open mind to the idea of sibling containers (a small sketch follows below). If you have any (specific) questions about it I will be able to help you way better ;)
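A rough sketch of the sibling approach (the image used is the official docker CLI image; everything else is an example): instead of running a daemon inside a container, you mount the host's Docker socket, so anything you start from inside becomes a sibling on the host.

# Mount the host's Docker socket into a container that has the docker CLI.
docker run -ti --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker sh

# Inside that shell, `docker ps` shows the host's containers and
# `docker run ...` starts siblings on the host daemon, not nested ones.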

Setting up a "docker-machine"

Setting up Docker can be done a thousand ways; one of the ways I do it can be found at this GitHub page.

I wrote that tutorial for a friend; it covers installing Docker and the automatic startup of containers (using both docker run and docker-compose).

Take from it what you will, it has served me well in the past :)
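If it helps, a rough sketch of provisioning machines and forming a swarm with docker-machine could look like this (the driver and machine names are examples, not taken from the tutorial):

# Provision a manager and two workers (virtualbox driver as an example)
docker-machine create --driver virtualbox manager1
docker-machine create --driver virtualbox worker1
docker-machine create --driver virtualbox worker2

# Initialise the swarm on the manager and print the worker join command
docker-machine ssh manager1 "docker swarm init --advertise-addr $(docker-machine ip manager1)"
docker-machine ssh manager1 "docker swarm join-token worker"

# Run the printed `docker swarm join ...` command on each worker machine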

I highly recommend installing Portainer to manage / check up on your containers.
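It runs as a container itself; a sketch following Portainer's commonly documented quick start (adjust ports and volumes to taste):

# Let Portainer talk to the local Docker daemon through its socket
docker run -d -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  portainer/portainer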

Upvotes: 3
