Reputation: 11
I am experimenting with storage clusters on RHEL 9.3, using GFS2 on top of DRBD replication. So far I have a stable setup with 3 nodes at the main site (one is the DRBD Primary and mounts the DRBD device, while the other 2 access the storage directly through the underlying device) and 3 nodes at a simulated remote site (one is the DRBD Secondary, kept in sync, and the other 2 are in standby; these 3 nodes cannot mount the storage unless a failover happens). DRBD is managed by Pacemaker as a promotable clone (clone-max=2). GFS2 allows the nodes not participating in the DRBD resource to write to the storage directly; whenever a write happens on the Primary node, DRBD picks up the changes on the underlying device and replicates them to the geo replica.
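For reference, the resources are defined roughly like this (a simplified sketch: the DRBD resource name r0, device path and mount point are placeholders, the dlm/lvmlockd pieces needed for GFS2 are omitted, and the Filesystem resource is shown as it looks on the node that mounts the DRBD device):

# DRBD as a promotable clone: clone-max=2, so only one node per site runs an instance
pcs resource create drbd ocf:linbit:drbd drbd_resource=r0 \
    promotable promoted-max=1 promoted-node-max=1 clone-max=2 notify=true

# GFS2 mount, cloned so several nodes can have it mounted at the same time
# (simplified: in my setup the Primary mounts /dev/drbd0 while the other
# nodes of the site mount the backing device directly)
pcs resource create mount ocf:heartbeat:Filesystem \
    device=/dev/drbd0 directory=/mnt/shared fstype=gfs2 \
    clone interleave=true

# make sure DRBD is promoted before the filesystem starts
pcs constraint order promote drbd-clone then start mount-clone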
Now I have reached the point where I'm trying to set up automation for the failover, and I would like to move the mounts dynamically to the correct nodes using constraints.
Ideally it should be: the GFS2 mount is active only on the nodes that sit at the same site as the node currently holding the Promoted (Primary) role of the DRBD clone, while the nodes at the other site keep the filesystem unmounted until a failover happens.
The same applies to both sites, so the mounts should follow the Promoted role wherever it moves.
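The idea is to tag each node with a custom site attribute, set once by hand, roughly like this (node1-main and node1-geo are placeholder names for the two DRBD nodes):

pcs node attribute node1-main site=main
pcs node attribute node2-main site=main
pcs node attribute node3-main site=main
pcs node attribute node1-geo site=geo
pcs node attribute node2-geo site=geo
pcs node attribute node3-geo site=geo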
My issue is crafting a location rule that checks which node holds role=Promoted in my DRBD clone, reads the custom site attribute of that node (site=main or site=geo), and then uses that value to match the other nodes of the same site. So something like:
# pseudo-syntax of what I would like to express, not a real command:
pcs constraint location mount-clone rule score=INFINITY \
    if (attribute site of the node with role=Promoted in drbd-clone eq main) \
    then prefer node2-main node3-main \
    else prefer node2-geo node3-geo
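The closest I can get with real syntax is a static rule on the site attribute, but that just pins the mount to one site and does not follow where drbd-clone is Promoted:

# static rule: always prefer the main site, regardless of where DRBD is Promoted
pcs constraint location mount-clone rule score=INFINITY site eq main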
Is this doable, and is it a reasonable approach to this problem?
Sorry in advance if I have broken any community rules; I rarely post on the board. Also, please forgive any hard-to-understand sentences, as English is not my native language and I'm still learning these topics.
Thanks
Upvotes: 0
Views: 26