spectrum

Reputation: 109

Docker swarm across multiple hosts using the same docker-compose file

I am building a Docker swarm across 3 hosts for the following services: Grakn, Redis, Elasticsearch, MinIO and RabbitMQ.

My questions are:

  1. Can I use one docker-compose.yml so that everything builds across the 3 hosts, or do we need 3 docker-compose.yml files?
  2. In order to have HA, I also want to add 3 more hosts so that, if one (physical) host fails, the services running on it are transferred to another one and service won't be interrupted.
  3. Can I use docker stack here? If so, how?
version: "3.8"

services:
  grakn:
    image: graknlabs/grakn:1.7.2
    ports:
      - 48555:48555
    volumes:
      - grakndata:/grakn-core-all-linux/server/db
    restart: always
  redis:
    image: redis:6.0.5
    restart: always
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    volumes:
      - esdata:/usr/share/elasticsearch/data
    environment:
      - discovery.type=single-node
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
  minio:
    image: minio/minio:RELEASE.2020-05-16T01-33-21Z
    volumes:
      - s3data:/data
    ports:
      - "9000:9000"
    environment:
      MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}
      MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}
    command: server /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    restart: always
  rabbitmq:
    image: rabbitmq:3.8-management
    environment:
      - RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
    restart: always

volumes:
  grakndata:
  esdata:
  s3data:

Upvotes: 1

Views: 505

Answers (1)

rokpoto.com

Reputation: 10720

Can I use one docker-compose.yml so that everything builds across the 3 hosts, or do we need 3 docker-compose.yml files?

Yes, you should use one docker-compose.yml file. In it you declare the services and their desired state, including the number of replicas of each.
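As a minimal sketch, one of your services could be made swarm-aware like this (the replica count of 2 and the restart policy are assumptions, not values from your question; note that docker stack deploy ignores restart: always and uses the deploy: section instead):

services:
  redis:
    image: redis:6.0.5
    deploy:
      replicas: 2              # assumed value, adjust to your needs
      restart_policy:
        condition: on-failure  # replaces restart: always under swarm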

In order to have HA, I also want to add 3 more hosts so that, if one (physical) host fails, the services running on it are transferred to another one and service won't be interrupted.

If you have initialized a cluster of Docker Engines in swarm mode and these engines run on different hosts, service replicas can run on any of them (unless you restrict service placement using Docker node labels). If a host fails, the swarm scheduler automatically reschedules its tasks onto the remaining healthy nodes, so the services are brought back up on the surviving hosts.
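As a rough sketch of such a restriction (the node name worker1 and the label storage=ssd are purely illustrative), you would first label a node and then add a placement constraint to the service:

docker node update --label-add storage=ssd worker1

services:
  minio:
    image: minio/minio:RELEASE.2020-05-16T01-33-21Z
    deploy:
      placement:
        constraints:
          - node.labels.storage == ssd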

Can I use docker stack here? If so, how?

Yes, run docker stack deploy --compose-file [Path to a Compose file] [stack name]
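For example, assuming the Compose file above is saved as docker-compose.yml and you name the stack mystack (both names are arbitrary):

# on the first host: create the swarm (prints a join command with a token)
docker swarm init

# on the other two hosts: join the swarm using the printed token
docker swarm join --token <token> <manager-ip>:2377

# back on a manager node: deploy all services from the single Compose file
docker stack deploy --compose-file docker-compose.yml mystack

# verify where the tasks are running
docker stack services mystack
docker service ps mystack_redis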

Upvotes: 1
