Afraz

Reputation: 847

Docker Swarm Service Cannot Be Reached on a Different Node After a Period of Time

Problem

I have a service running in Docker Swarm in an overlay network, which I start like this:

docker service create \
  --name db-master \
  --label type=database \
  --constraint "engine.labels.type == database" \
  --network starnet \
  --mount type=bind,source=/var/lib/mysql,target=/var/lib/mysql \
  -e MYSQL_ALLOW_EMPTY_PASSWORD=yes \
  percona:5.6.32

The service starts fine and is reachable as expected from inside a different service running on a different node in the swarm, but only if "All traffic" is opened on the security group (I'm on AWS).

The db-master service above cannot be reached from the other service on the other node if only the necessary ports (2377/tcp, 7946/tcp+udp, 4789/udp) are open.
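For reference, a minimal sketch of what those rules look like with the AWS CLI, assuming a single security group shared by all swarm nodes (the group ID sg-0123456789abcdef0 is a placeholder):

# Placeholder security-group ID shared by every swarm node; substitute your own.
SG=sg-0123456789abcdef0

# 2377/tcp: cluster management (swarm join, raft)
aws ec2 authorize-security-group-ingress --group-id "$SG" \
  --protocol tcp --port 2377 --source-group "$SG"

# 7946 tcp+udp: node-to-node gossip
aws ec2 authorize-security-group-ingress --group-id "$SG" \
  --protocol tcp --port 7946 --source-group "$SG"
aws ec2 authorize-security-group-ingress --group-id "$SG" \
  --protocol udp --port 7946 --source-group "$SG"

# 4789/udp: VXLAN data plane for the overlay network
aws ec2 authorize-security-group-ingress --group-id "$SG" \
  --protocol udp --port 4789 --source-group "$SG"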

Details of my Setup

What I've Tried

The Kicker

Opening all UDP and TCP ports made no difference, but if I open up "All traffic", it starts working immediately.

This obviously points to something not being open between the nodes, but since I've tried opening all UDP and TCP ports both separately and together, I'm struggling to figure out what it could be.
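As a quick sanity check, something like the following can be used to probe reachability between two nodes (10.0.1.12 is a placeholder for the other node's private IP):

# Probe the TCP ports from one node to the other (placeholder IP).
nc -zv 10.0.1.12 2377
nc -zv 10.0.1.12 7946
# The UDP ports (7946, 4789) can't be verified reliably this way, so also
# confirm the swarm and overlay network still look healthy from a manager:
docker node ls
docker network inspect starnet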

Upvotes: 3

Views: 486

Answers (1)

Afraz

Reputation: 847

Turns out that as well as the documented ports, you also need to open IP protocol 50 (ESP), added as a "Custom Protocol" rule in the AWS security group.
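With the AWS CLI that rule can be added roughly like this, again assuming a shared security group with the placeholder ID sg-0123456789abcdef0; in the console it is the same thing entered as a "Custom Protocol" rule with protocol number 50:

# ESP (IP protocol 50) has no port numbers, so only the protocol is specified.
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol 50 --source-group sg-0123456789abcdef0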

Had to file a bug report to get the answer, but at least it'll be added to the documentation now :)

Upvotes: 1
