Eduardo Campos

Reputation: 134

Docker Swarm with multiple hosts (different physical machines)

As a bit of context, I am fairly new to Docker and Docker Compose, and until recently I had never even heard of Docker Swarm. I should not be the one responsible for the task I've been given, but it's not like I can offload it to someone else...

So, the idea is to have two different physical machines to host a web server. One of the machines will run an Express.js server plus a Redis database, while the other machine hosts the system database (a Postgres DB).

Up until now I had a docker-compose.yaml file which created all these services and ran them.

version: '3.8'
services:
  server:
    image: server
    build:
      context: .
      target: build-node
    volumes:
      - ./:/src/app
      - /src/app/node_modules
    container_name: server
    ports:
      - 3000:3000
    depends_on:
      - postgres
      - redis
    entrypoint:
      ['./wait-for-it.sh', '-t', '30', 'postgres:5432', '--', 'yarn', 'dev']
    networks:
      - servernet

  # postgres database
  postgres:
    image: postgres
    user: postgres
    restart: always
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - ./data:/var/lib/postgresql/data # persist data even if container shuts down
      - ./db_scripts/startup.sh:/docker-entrypoint-initdb.d/c_startup.sh
      #- ./db_scripts/db.sql:/docker-entrypoint-initdb.d/a_db.sql
      #- ./db_scripts/db_population.sql:/docker-entrypoint-initdb.d/b_db_population.sql
    ports:
      - '5432:5432'
    networks:
      - servernet

  # pgadmin for managing the postgres db (runs at localhost:5050)
  # To add the above postgres server to pgadmin, use hostname as defined by docker: 'postgres'
  pgadmin:
    image: dpage/pgadmin4
    restart: always
    environment:
      - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
      - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
    depends_on:
      - postgres
    ports:
      - 5050:80
    networks:
      - servernet

  redis:
    image: redis
    networks:
      - servernet

networks:
  servernet:

I would normally run this with docker-compose up and that was the end of my concerns: everything ran together on localhost. But with this new setup I have no idea what to do. From what I've read, I have to create a swarm, but then how do I go about running everything from the same place (or with one command)? And how do I specify which services are to be executed on which machine?

Additionally, here is my Dockerfile in case it's useful:

FROM node as build-node

WORKDIR /src/app

COPY package.json .
COPY yarn.lock .
COPY wait-for-it.sh .

COPY . .
RUN yarn
RUN yarn style:fix
RUN yarn lint:fix
RUN yarn build
EXPOSE 3000
ENTRYPOINT yarn dev

Is my current docker-compose script even capable of being used with this new setup?

This is really over my head and I've got no idea where to start. The Docker documentation is also a bit confusing since I don't have much knowledge of Docker to begin with...

Thanks in advance!

Upvotes: 0

Views: 2046

Answers (1)

Kamèl Romdhani

Reputation: 370

You first need to learn what Docker Swarm is and how it works.

Docker Swarm is a container orchestration tool, meaning that it lets you manage multiple containers deployed across multiple host machines.

To answer your question briefly: "how do I go about running everything from the same place?"

You can use the docker stack deploy command to deploy a set of services. Yes, you run it from one host machine; you don't have to run it on the different machines. The machine you run it from is called the manager node.
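For example, from the manager node the deploy might look like this (a minimal sketch; the stack name webapp is just an example):

docker stack deploy --compose-file docker-compose.yaml webapp

# see which services were created and on which node each task is running
docker stack services webapp
docker stack ps webapp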

The good news is that you can still use your docker-compose file, with a few modifications.
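For instance, one way to control which services land on which machine is a deploy.placement.constraints block per service. Below is a minimal sketch, assuming you add role labels to your nodes on the manager with docker node update --label-add role=db <node-name> (the label values and the registry image name are just examples, not part of your setup). Note that docker stack deploy ignores build:, container_name: and depends_on:, so the server image has to be built and pushed to a registry the other machine can pull from:

version: '3.8'
services:
  server:
    image: registry.example.com/server:latest   # example only: swarm ignores build:, so deploy a pushed image
    deploy:
      placement:
        constraints:
          - node.labels.role == app   # schedule only on the node labelled role=app

  postgres:
    image: postgres
    deploy:
      placement:
        constraints:
          - node.labels.role == db    # schedule only on the node labelled role=db

Using labels rather than node.hostname keeps the constraint stable if a machine is renamed or replaced, but a constraint like node.hostname == <name> also works if you prefer.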

So, to summarize, the steps you need to take are the following:

  • set up the swarm (1 manager and 1 worker, since you only have 2
    machines); see the commands sketched after this list

  • make sure it's working fine (communication between the nodes)

  • prepare your docker-compose file and deploy your stack from the
    manager node
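A minimal sketch of the first two bullets, assuming 192.168.1.10 is the manager machine's address (the IP is a placeholder, and the token is printed by the init command):

# on the machine chosen as manager
docker swarm init --advertise-addr 192.168.1.10

# docker swarm init prints a join command with a token; run it on the second machine
docker swarm join --token <worker-token> 192.168.1.10:2377

# back on the manager, check that both nodes show up and are Ready
docker node ls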

Upvotes: 1
