Reputation: 3439
I have been trying to run a single-node docker swarm for testing on RHEL 7.6. firewalld
is disabled and not running. Services are running on an overlay
network. I noticed that I can't connect to the published ports, either from the host or from outside. This behaviour is consistent across the few RHEL instances I tried. I use docker swarm on Ubuntu 16.04 LTS and 18.04 LTS without any glitches.
Below is my docker info output:
Client:
Debug Mode: false
Server:
Containers: 14
Running: 3
Paused: 0
Stopped: 11
Images: 4
Server Version: 19.03.3
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
NodeID: fhewk7l15g42o36henpfigwjk
Is Manager: true
ClusterID: kegypzam66ehi6s50utrsff1l
Managers: 1
Nodes: 1
Default Address Pool: 10.0.0.0/8
SubnetSize: 24
Data Path Port: 4789
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 10.0.1.125
Manager Addresses:
10.0.1.125:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-957.5.1.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.6 (Maipo)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.33GiB
Name: rhel-test.dev.koopid.io
ID: IM3X:THRY:FYUO:L7XI:VJW6:5B4Y:VZOX:YL43:E7WR:U5GM:3BQK:NLKP
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
And here is my overlaynet network:
[
{
"Name": "overlaynet",
"Id": "4g4dphekzyshqpcp0fjfmc877",
"Created": "2019-10-18T14:29:06.284905975Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.20.0.0/24",
"Gateway": "172.20.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"142c22a7e517f463f37c89cfb58dcde37f9529c9b469357b37868057be044e48": {
"Name": "dbsvcs_redis.1.0lsxkr88eq89igid7w7ifk3wq",
"EndpointID": "167fbdfb2146f09bb20c258fea52d9f8ca886cf1d264b1d8cd9169532c26b9db",
"MacAddress": "02:42:ac:14:00:03",
"IPv4Address": "172.20.0.3/24",
"IPv6Address": ""
},
"2e70a7589f13c74be66149d5bbf9504b5b74aee1ad6711f82ec4b02011c00cc1": {
"Name": "dbpg_postgresql-rw.1.9keeuowk9zk5e6f8bq5a0itij",
"EndpointID": "44a2376b4d0d2bdb8787c9cc18726da140ca0f9a8e97e54a6a78b2206e10a13b",
"MacAddress": "02:42:ac:14:00:06",
"IPv4Address": "172.20.0.6/24",
"IPv6Address": ""
},
"d9119bb3d605aa9b2df23985cd884afa941499d888937e3c34f4ec08dac14c73": {
"Name": "dbsvcs_influxdb.1.ap5cg0se1rntdbsopxbm7whma",
"EndpointID": "d2a5c093a0721291a114309ef1fd690510b03007fdaf83c8d77e00870a1568cd",
"MacAddress": "02:42:ac:14:00:04",
"IPv4Address": "172.20.0.4/24",
"IPv6Address": ""
},
"lb-overlaynet": {
"Name": "overlaynet-endpoint",
"EndpointID": "2bdf0d2370856d9a4b2da1e86d65521585ffc89c778f5db1d3f4b2fd39da7c8b",
"MacAddress": "02:42:ac:14:00:08",
"IPv4Address": "172.20.0.8/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4097"
},
"Labels": {},
"Peers": [
{
"Name": "80ab8f4e3bcd",
"IP": "10.0.1.125"
}
]
}
]
I have the following services, and as you can see, all of them publish ports:
4j7p43udxkoc dbpg_postgresql-rw replicated 1/1 myregistry/postgres *:5432->5432/tcp
hu0wkspwc7j3 dbsvcs_influxdb replicated 1/1 myregistry/influxdb *:8086->8086/tcp
dlte2nzg226x dbsvcs_redis replicated 1/1 myregistry/redis *:6379->6379/tcp
And you can see that port 5432 is listening on INADDR_ANY on the host:
tcp6 1 0 :::5432 :::* LISTEN
However, I can't connect to port 5432 from the host or from outside. The psql
client times out, as if some firewall is blocking the connection.
If I enable firewalld, I can see the following errors:
firewalld[2809]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -X DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.
firewalld[2809]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -F DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name.
firewalld[2809]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -X DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name.
firewalld[2809]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
firewalld[2809]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker_gwbridge -o docker_gwbridge -j ACCEPT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
firewalld[2809]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
firewalld[2809]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -nL DOCKER-INGRESS' failed: iptables: No chain/target/match by that name.
firewalld[2809]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -nL DOCKER-INGRESS' failed: iptables: No chain/target/match by that name.
firewalld[2809]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -nL DOCKER-INGRESS' failed: iptables: No chain/target/match by that name.
firewalld[2809]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -nL DOCKER-INGRESS' failed: iptables: No chain/target/match by that name.
Is this something I should worry about? Do I need to fiddle with iptables
on RHEL to get docker swarm working? There are some reports that docker control ports need to be added to iptables
for multi-node cluster configurations. My iptables
configuration looks like this:
$ iptables -L -v -n --line-numbers
Chain INPUT (policy ACCEPT 82507 packets, 8110K bytes)
num pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 30 5664 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
2 30 5664 DOCKER-INGRESS all -- * * 0.0.0.0/0 0.0.0.0/0
3 30 5664 DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
4 0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
5 0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
6 0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
7 0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
8 14 4064 ACCEPT all -- * docker_gwbridge 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
9 0 0 DOCKER all -- * docker_gwbridge 0.0.0.0/0 0.0.0.0/0
10 16 1600 ACCEPT all -- docker_gwbridge !docker_gwbridge 0.0.0.0/0 0.0.0.0/0
11 0 0 DROP all -- docker_gwbridge docker_gwbridge 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 82105 packets, 8106K bytes)
num pkts bytes target prot opt in out source destination
Chain DOCKER (2 references)
num pkts bytes target prot opt in out source destination
Chain DOCKER-INGRESS (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:5432
2 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED tcp spt:5432
3 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:6379
4 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED tcp spt:6379
5 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8086
6 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED tcp spt:8086
7 30 5664 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
num pkts bytes target prot opt in out source destination
1 0 0 DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
2 16 1600 DOCKER-ISOLATION-STAGE-2 all -- docker_gwbridge !docker_gwbridge 0.0.0.0/0 0.0.0.0/0
3 30 5664 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
num pkts bytes target prot opt in out source destination
1 0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
2 0 0 DROP all -- * docker_gwbridge 0.0.0.0/0 0.0.0.0/0
3 16 1600 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
num pkts bytes target prot opt in out source destination
1 30 5664 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
I'd appreciate some help/direction to get this working on RHEL, as I've been stuck on this for the last couple of weeks. Configuring and running docker swarm
on Ubuntu was a breeze!
Upvotes: 0
Views: 2390
Reputation: 3439
Here is how I finally got it working. I don't have an explanation for every step. I also noticed that I can't connect to ports published by services from localhost,
and the firewalld
rules get messed up at times, which requires a reboot. I'm still investigating these issues.

1. Follow the answer by Bertrand_Szoghy to install docker-ce and related packages first.
2. firewalld or iptables needs to be installed on the server; firewalld is recommended on RHEL 7 or later.
3. Open the docker swarm ports using firewalld. Follow the tutorial here. Also, make sure to open the ports required by your services. Reload the firewall rules (firewall-cmd --reload).
4. Initialize the swarm (docker swarm init).
5. Create the overlay network (docker network create --subnet 172.20.1.0/24 --driver overlay --attachable overlaynet).

I noticed that the firewall configuration is important before initializing docker swarm.
I was not able to connect to published ports from localhost or via the host IP when I updated the firewalld configuration after initializing the swarm. I'm not sure why this order matters, though.
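For reference, step 3 above can be sketched as the following firewall-cmd invocations. The swarm ports (2377/tcp for cluster management, 7946/tcp and udp for node communication, 4789/udp for VXLAN overlay traffic) come from Docker's documentation; the zone is assumed to be firewalld's default, and the service ports are the ones published by this particular stack:

```shell
# Docker Swarm control/data-path ports (sketch; assumes the default firewalld zone)
firewall-cmd --permanent --add-port=2377/tcp   # cluster management (swarm init/join)
firewall-cmd --permanent --add-port=7946/tcp   # node-to-node communication
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp   # VXLAN overlay network traffic

# Ports published by the services in this stack
firewall-cmd --permanent --add-port=5432/tcp   # postgresql
firewall-cmd --permanent --add-port=6379/tcp   # redis
firewall-cmd --permanent --add-port=8086/tcp   # influxdb

# Apply the permanent rules to the running configuration
firewall-cmd --reload
```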
Currently I'm able to connect to the published service ports via the swarm manager's
IP address, from the swarm manager itself or from outside the host. I'm still investigating what firewall rules need to be added for connecting via localhost.
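To check whether a published port is reachable from a given address without waiting on a psql timeout, a quick TCP probe using plain bash and its /dev/tcp pseudo-device works; the IP and port below are from this setup, so adjust them for your own services:

```shell
# Probe a TCP port: prints "open" if a connection succeeds within 2s, else "closed".
probe() {
  if timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; then
    echo open
  else
    echo closed
  fi
}

probe 127.0.0.1 5432    # published port via localhost
probe 10.0.1.125 5432   # published port via the node's own IP
```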
Upvotes: 1