Reputation: 23
I'm trying to set up a MySQL Cluster on a Docker Swarm. Given that we have 3 nodes (1 manager, 2 workers), we are trying to install it on the manager node.
This is the my.cnf file (correctly read)
[mysqld]
ndbcluster
ndb-connectstring=management1
user=mysql
skip_name_resolve
[mysql_cluster]
ndb-connectstring=management1
This is the mysql-cluster.cnf file (correctly read)
[ndbd default]
NoOfReplicas=2
DataMemory=80M
[ndb_mgmd]
HostName=management1
DataDir=/var/lib/mysql-cluster
[ndbd]
HostName=ndb1
DataDir=/var/lib/mysql-cluster
[ndbd]
HostName=ndb2
DataDir=/var/lib/mysql-cluster
[mysqld]
HostName=mysql1
This is the Docker Compose file (deployed from a git repository via Portainer), run with e.g.: docker stack deploy --compose-file docker-compose.yml vossibility
version: '3.3'
services:
  management1:
    image: mysql/mysql-cluster
    command: ndb_mgmd
    networks:
      - "meroex-network"
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
  ndb1:
    image: mysql/mysql-cluster
    command: ndbd
    networks:
      - "meroex-network"
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
  ndb2:
    image: mysql/mysql-cluster
    command: ndbd
    networks:
      - "meroex-network"
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
  mysql1:
    image: mysql/mysql-cluster
    ports:
      - "3306:3306"
    restart: always
    command: mysqld
    depends_on:
      - "management1"
      - "ndb1"
      - "ndb2"
    networks:
      - "meroex-network"
    deploy:
      placement:
        constraints:
          - node.role == manager
networks:
  meroex-network:
    external: true
The network is an overlay network with a /24 subnet:
[
    {
        "Name": "meroex-network",
        "Id": "vs7lmefftygiqkzfxf9u4dqxi",
        "Created": "2021-10-07T06:29:10.608882532+08:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.3.0/24",
                    "Gateway": "10.0.3.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            ...
            "lb-meroex-network": {
                "Name": "meroex-network-endpoint",
                "EndpointID": "a82dd38ffeb66e3a365140b51d8614fdf08ca0f0ffb01c8262a16bde49c891ad",
                "MacAddress": "02:42:0a:00:03:34",
                "IPv4Address": "10.0.3.52/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4099"
        },
        "Labels": {},
        "Peers": [
            ...
        ]
    }
]
When deploying the stack we receive the following error in the management1 service:
2021-10-07 00:03:34 [MgmtSrvr] ERROR -- at line 33: Could not resolve hostname [node 1]: management1
2021-10-07 00:03:34 [MgmtSrvr] ERROR -- Could not load configuration from '/etc/mysql-cluster.cnf'
I'm stuck on why the service names are not resolved in this case. I have numerous other Spring Boot apps that use their service names to communicate with each other without any issue.
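One way to narrow this down is to check whether swarm's embedded DNS can actually resolve the service names from inside a task on the same overlay network. A diagnostic sketch (the stack name vossibility and the service names are taken from the question; swarm names containers as &lt;stack&gt;_&lt;service&gt;.&lt;slot&gt;.&lt;task-id&gt;):

```shell
# Find the container running the management service on this node.
CID=$(docker ps --quiet --filter name=vossibility_management1)

# Query the container's resolver (swarm's embedded DNS) for each
# service name referenced in mysql-cluster.cnf.
for host in management1 ndb1 ndb2 mysql1; do
  docker exec "$CID" getent hosts "$host"
done
```

If getent is not available in the mysql/mysql-cluster image, ping -c 1 &lt;name&gt; serves the same purpose. Names that fail here fail for the management server too.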
Upvotes: 1
Views: 615
Reputation: 84
It might be that the name lookup for some hostname happens before any address has been assigned and published by the name server.
By default the management server verifies all hostnames appearing in the configuration; if any lookup fails, the management server will fail to start.
Since MySQL Cluster 8.0.22 there is a configuration parameter that allows the management server to start without having successfully verified all hostnames. This is suitable for environments where hosts appear on demand, possibly with a new IP address each time.
Try adding the following to your mysql-cluster.cnf:
[tcp default]
AllowUnresolvedHostnames=1
See the manual: https://dev.mysql.com/doc/refman/8.0/en/mysql-cluster-tcp-definition.html
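For reference, with that section added, the mysql-cluster.cnf from the question would begin like this (a sketch; only the [tcp default] section is new):

[ndbd default]
NoOfReplicas=2
DataMemory=80M
[tcp default]
AllowUnresolvedHostnames=1
[ndb_mgmd]
HostName=management1
DataDir=/var/lib/mysql-cluster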
Upvotes: 0