Reputation: 11
I am trying to create a cluster between 2 nodes, each with 2 network interfaces. The idea is that the cluster fails over to the other node when either of the active node's two interfaces goes down (or, of course, both of them). The problem is that the cluster only fails over if interface eth1 of the active node goes down. If interface eth0 of the active node goes down, the cluster never fails over.
This is the network configuration of the nodes:
node1:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.0.3 netmask 255.255.255.248 broadcast 192.168.0.7
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 172.26.34.2 netmask 255.255.255.248 broadcast 172.26.34.7
node2:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.0.4 netmask 255.255.255.248 broadcast 192.168.0.7
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 172.26.34.3 netmask 255.255.255.248 broadcast 172.26.34.7
These are the commands I use to create the cluster between the nodes and assign the resources:
pcs cluster auth node1 node2 -u hacluster -p 1234 --debug --force
pcs cluster setup --name HAFirewall node1 node2 --force
pcs cluster start --all
pcs resource create VirtualIP_eth0 ocf:heartbeat:IPaddr2 ip=192.168.0.1 cidr_netmask=29 nic=eth0 op monitor interval=30s --group InterfacesHA
pcs resource create VirtualIP_eth1 ocf:heartbeat:IPaddr2 ip=172.26.34.1 cidr_netmask=29 nic=eth1 op monitor interval=30s --group InterfacesHA
pcs property set stonith-enabled=false
pcs property set no-quorum-policy=ignore
pcs resource enable InterfacesHA
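For reference, a link failure of the kind described above can be simulated as follows (a sketch; taking the link down with ip link is one way to reproduce the failure, any equivalent method works):

# on the currently active node: take eth0 down and watch whether the group moves
ip link set eth0 down
pcs status
# bring the link back up afterwards
ip link set eth0 up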
This is the configuration of the corosync.conf file:
totem {
    version: 2
    secauth: off
    cluster_name: HAFirewall
    transport: udpu
}

nodelist {
    node {
        ring0_addr: node1
        nodeid: 1
    }
    node {
        ring0_addr: node2
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_logfile: yes
    logfile: /var/log/corosync/corosync.log
    to_syslog: yes
}
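Note that with transport: udpu and a single ring0_addr per node, corosync heartbeats over only one network: whichever addresses the names node1 and node2 resolve to. Which interface the ring is actually bound to can be checked on either node with corosync-cfgtool (part of corosync itself):

# prints each ring's id, bound address and fault status
corosync-cfgtool -s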
This is the output of the pcs status command:
Cluster name: HAFirewall
Stack: corosync
Current DC: node1 (version 1.1.16-94ff4df) - partition WITHOUT quorum
Last updated: Tue Oct 27 19:01:35 2020
Last change: Tue Oct 27 18:22:27 2020 by hacluster via crmd on node2
2 nodes configured
2 resources configured
Online: [ node1 ]
OFFLINE: [ node2 ]
Full list of resources:
 Resource Group: InterfacesHA
     VirtualIP_eth0 (ocf::heartbeat:IPaddr2): Started node1
     VirtualIP_eth1 (ocf::heartbeat:IPaddr2): Started node1
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
This is the output of the crm configure show command:
node 1: node1
node 2: node2
primitive VirtualIP_eth0 IPaddr2 \
        params ip=192.168.0.1 cidr_netmask=29 \
        op start interval=0s timeout=20s \
        op stop interval=0s timeout=20s \
        op monitor interval=30s
primitive VirtualIP_eth1 IPaddr2 \
        params ip=172.26.34.1 cidr_netmask=29 \
        op start interval=0s timeout=20s \
        op stop interval=0s timeout=20s \
        op monitor interval=30s
group InterfacesHA VirtualIP_eth0 VirtualIP_eth1
location cli-prefer-InterfacesHA InterfacesHA role=Started inf: node1
property cib-bootstrap-options: \
        stonith-enabled=false \
        no-quorum-policy=ignore \
        have-watchdog=false \
        dc-version=1.1.16-94ff4df \
        cluster-infrastructure=corosync \
        cluster-name=HAFirewall
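(As an aside, the location constraint cli-prefer-InterfacesHA shown above is the kind left behind by pcs resource move / crm resource move; it pins the group to node1. If that preference is not intended, it can be removed with:

pcs constraint remove cli-prefer-InterfacesHA
)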
And these are the interfaces of node1 when it is active and has the virtual IPs up:
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ac:1f:6b:90:a5:58 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.3/29 brd 192.168.0.7 scope global eth0
       valid_lft forever preferred_lft forever
    inet 192.168.0.1/29 brd 192.168.0.7 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::ae1f:6bff:fe90:a558/64 scope link
       valid_lft forever preferred_lft forever
eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ac:1f:6b:90:a5:59 brd ff:ff:ff:ff:ff:ff
    inet 172.26.34.2/29 brd 172.26.34.7 scope global eth1
       valid_lft forever preferred_lft forever
    inet 172.26.34.1/29 brd 172.26.34.7 scope global secondary eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::ae1f:6bff:fe90:a559/64 scope link
       valid_lft forever preferred_lft forever
Any idea why the cluster fails over perfectly when the eth1 interface goes down, but never does when the eth0 interface goes down?
Regards, and thanks.
Upvotes: 1
Views: 1677
Reputation: 66
I believe you need to specify both interfaces in corosync.conf:
interface {
    ringnumber: 0
    bindnetaddr: 192.168.0.4
    ...
}
interface {
    ringnumber: 1
    bindnetaddr: 172.26.34.3
    ...
}
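Since the question's corosync.conf uses transport: udpu with a nodelist, the equivalent way to declare the second ring on corosync 2.x is a ring1_addr per node together with an rrp_mode setting, rather than interface/bindnetaddr blocks. A minimal sketch, assuming the addresses from the question:

totem {
    version: 2
    secauth: off
    cluster_name: HAFirewall
    transport: udpu
    # run both rings; passive is the usual choice for redundant rings
    rrp_mode: passive
}

nodelist {
    node {
        # eth0 and eth1 addresses of node1
        ring0_addr: 192.168.0.3
        ring1_addr: 172.26.34.2
        nodeid: 1
    }
    node {
        # eth0 and eth1 addresses of node2
        ring0_addr: 192.168.0.4
        ring1_addr: 172.26.34.3
        nodeid: 2
    }
}

After updating the file on both nodes, corosync must be restarted for the second ring to come up; corosync-cfgtool -s should then report two rings.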
Upvotes: 1