Jason

Reputation: 2915

Docker on Multipass VMs: Connecting worker nodes to swarm results in rpc error

I'm using Multipass on an Ubuntu host to launch multiple local VMs, with the goal of creating a Docker swarm spanning those VMs. Everything with Multipass itself and the installation of Docker (I used the script at https://get.docker.com/) has gone well.

I've also been able to initialize the swarm by setting up node1 as the manager with docker swarm init --advertise-addr <VARIOUS_IPs>, where <VARIOUS_IPs> has been each of the following IPs in turn:

  1. 127.0.0.1 (as per this SO post)
  2. 172.17.0.1, as per the output that I get for the docker0 interface when logging into node1 through multipass shell node1:
jason@jason-ubuntu-desktop:~$ multipass shell node1
Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-101-generic x86_64)

 .
.
.

  System load:  0.0               Processes:                97
  Usage of /:   45.4% of 4.67GB   Users logged in:          0
  Memory usage: 31%               IPv4 address for docker0: 172.17.0.1
  Swap usage:   0%                IPv4 address for ens3:    10.126.204.207


Expanded Security Maintenance for Applications is not enabled.
.
.
.
  3. 10.126.204.207, which is the IP assigned to the interface ens3, as you can see in the output above.

  4. 192.168.2.6, which is what ifconfig -a gives for the interface enp5s0 on the HOST machine:

jason@jason-ubuntu-desktop:~$ ifconfig -a
enp5s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.2.6  netmask 255.255.255.0  broadcast 192.168.2.255
        inet6 2a02:85f:e0c9:8900:3523:25bb:2763:1923  prefixlen 64  scopeid 0x0<global>
        inet6 2a02:85f:e0c9:8900:b0e8:1b53:b5c4:eddc  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::4277:203e:fe56:63a8  prefixlen 64  scopeid 0x20<link>
        ether 08:bf:b8:75:50:9b  txqueuelen 1000  (Ethernet)
        RX packets 3135402  bytes 4280258798 (4.2 GB)
        RX errors 0  dropped 19  overruns 0  frame 0
        TX packets 2289436  bytes 233651001 (233.6 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 10189  bytes 1551793 (1.5 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10189  bytes 1551793 (1.5 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

mpqemubr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.126.204.1  netmask 255.255.255.0  broadcast 10.126.204.255
        inet6 fe80::5054:ff:fe50:214b  prefixlen 64  scopeid 0x20<link>
        ether 52:54:00:50:21:4b  txqueuelen 1000  (Ethernet)
        RX packets 616652  bytes 38339106 (38.3 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 815903  bytes 1211324469 (1.2 GB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tap-7d21c24c2a2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::3cd8:e7ff:fe95:b7b7  prefixlen 64  scopeid 0x20<link>
        ether 3e:d8:e7:95:b7:b7  txqueuelen 1000  (Ethernet)
        RX packets 78108  bytes 5871486 (5.8 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 106416  bytes 158301533 (158.3 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tap-9f0a4d14af6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::6073:fcff:fe8d:fe22  prefixlen 64  scopeid 0x20<link>
        ether 62:73:fc:8d:fe:22  txqueuelen 1000  (Ethernet)
        RX packets 80889  bytes 6172614 (6.1 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 106378  bytes 158520504 (158.5 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tap-f33ea83d210: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::f433:24ff:feb1:3f5f  prefixlen 64  scopeid 0x20<link>
        ether f6:33:24:b1:3f:5f  txqueuelen 1000  (Ethernet)
        RX packets 79189  bytes 5937738 (5.9 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 106441  bytes 158499276 (158.4 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
  5. 192.168.2.255, which is what ifconfig -a returns as the broadcast address for the enp5s0 interface, as you can see above.

  6. 10.126.204.1, which is what ifconfig -a returns for the mpqemubr0 interface, as you can see above.

  7. The public IP associated with my router (not pasting that one for security reasons).
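In case it helps, this is how I pull an interface's IPv4 address in my setup scripts. (A captured line of ip -4 addr show ens3 output is hard-coded here so the snippet stands alone; on the VM you would pipe the real command instead of the sample.)

```shell
# Extract an interface's IPv4 address from `ip -4 addr show` output.
# Sample output hard-coded for illustration; on the VM, replace the
# printf with: ip -4 addr show ens3
sample='2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 10.126.204.207/24 brd 10.126.204.255 scope global ens3'
printf '%s\n' "$sample" | awk '/inet /{split($2, a, "/"); print a[1]}'
```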

No matter which IP I use, the swarm starts up successfully, e.g., for 172.17.0.1:

ubuntu@node1:~$ docker swarm init --advertise-addr 172.17.0.1
Swarm initialized: current node (1uih27t5jrmoe56hg6ko6zc7u) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token <TOKEN> 172.17.0.1:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

However, when I paste the generated swarm join command on either of the other two nodes, after a wait of about 5 seconds I get:

Error response from daemon: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 172.17.0.1:2377: connect: connection refused"
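The "connection refused" suggests nothing is listening on the advertised address as seen from the workers. This is the quick probe I run (it uses bash's /dev/tcp redirection; demoed against a local port that is almost certainly closed so it runs anywhere — on a worker VM I would substitute the manager's IP and port 2377):

```shell
# Minimal TCP reachability probe using bash's /dev/tcp (the same check
# that the failing `dial tcp 172.17.0.1:2377` performs).
# On a worker VM, call it as: probe <manager-ip> 2377
probe() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null && echo open || echo refused
}
probe 127.0.0.1 1    # port 1 is almost certainly closed locally
```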

I'm wondering which IP I should be advertising so that the worker nodes can join the swarm started from the manager node.

I also want to point out that Ubuntu's default firewall (ufw) seems to be disabled:

root@jason-ubuntu-desktop:/home/jason# ufw status
Status: inactive
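And on the manager, after swarm init, I check that dockerd is actually in a LISTEN state on 2377 with ss -tln. (A captured sample line is hard-coded below so the check is self-contained; on the manager VM you would run the real command shown in the comment.)

```shell
# Verify the swarm management port is listening. On the manager VM:
#   sudo ss -tln | awk '$4 ~ /:2377$/'
# Sample `ss -tln` output line used here for illustration.
sample='LISTEN 0      4096                 *:2377             *:*'
printf '%s\n' "$sample" | awk '$4 ~ /:2377$/ {print "listening on 2377"}'
```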

Upvotes: 0

Views: 74

Answers (0)
