Reputation: 3034
We have a MongoDB replica set which has 3 nodes.
Somehow our replica set has ended up with nodes 1 and 2 both being set as secondary members. I'm not sure how this has happened (we did migrate the server that one of the nodes runs on, but only that one).
Anyway, I've been trying to re-elect a new primary for the replica set following the guide here
I'm unable to just use
rs.reconfig(cfg)
as it will only work if directed at the primary (which I don't have).
Using the force parameter
rs.reconfig(cfg, { force: true })
appears to work, but when I re-query the status of the replica set, both servers are still showing only as secondary.
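For reference, a minimal sketch of what I'm running from a mongo shell connected to one of the secondaries, per the guide (the exact member edits depend on the config):
cfg = rs.conf()
// adjust cfg.members here as needed (e.g. drop unreachable members, per the guide)
rs.reconfig(cfg, { force: true })
// then re-check the member states
rs.status().members.forEach(function (m) { print(m.name + " -> " + m.stateStr); })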
Why hasn't the force reconfig worked? At the moment the database is effectively locked, whatever I try.
Upvotes: 8
Views: 28789
Reputation: 1
I had the same situation: it happened because the arbiter believed it had the most recent opTime.
I found it in the log with: grep ELECTION /var/log/mongodb/mongod.log
"ARBITER-NODE:27017" ... "reason":"candidate's data is staler than mine. candidate's last applied OpTime: .."
The reason for this behavior is that the data nodes were restored from a backup snapshot while the arbiter was not. If that is acceptable, the solution is to temporarily stop the arbiter node.
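A minimal way to apply that workaround, assuming the arbiter's mongod runs under systemd and that losing its vote briefly is acceptable:
# on the arbiter host: stop the arbiter temporarily
sudo systemctl stop mongod
# from a mongo shell on one of the data nodes: watch for a primary to be elected
mongo --eval 'rs.status().members.forEach(function (m) { print(m.name + " " + m.stateStr); })'
# once a primary exists, start the arbiter again
sudo systemctl start mongod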
Upvotes: 0
Reputation: 1293
1. Convert all nodes to standalone: stop the mongod daemon and edit /etc/mongod.conf to comment out the replSet option, then start the mongod daemon again (see the command sketch after this list).
2. Use mongodump to back up the data on all nodes. Reference from the mongo docs: https://docs.mongodb.com/manual/reference/program/mongodump/
3. Log into each node and drop the local database. Doing this deletes the replica set config on the node. Or you can just delete the record in the system.replset collection in the local database, as described here: https://stackoverflow.com/a/31745150/4242454
4. Start all nodes with the replSet option.
5. On the previous data node (not the arbiter), initialize a new replica set.
6. Finally, reconfigure the replica set with rs.reconfig.
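A rough sketch of the commands for the steps above, assuming a systemd-managed mongod, a replica set named rs0, and placeholder host names (data-node-1, data-node-2); adjust paths and the config syntax (legacy replSet vs. the YAML replication.replSetName form) to your setup:
# step 1: on each node, stop mongod and comment out the replica set option in /etc/mongod.conf
sudo systemctl stop mongod
#   #replication:
#   #  replSetName: rs0
sudo systemctl start mongod
# step 2: back up each node while it runs standalone
mongodump --host localhost --port 27017 --out /backup/$(hostname)-dump
# step 3: drop the local database to remove the old replica set config
mongo --eval 'db.getSiblingDB("local").dropDatabase()'
# step 4: uncomment the replica set option again and restart
sudo systemctl restart mongod
# steps 5-6: from a mongo shell on the former data node, initiate the set and reconfig in the other members
mongo
> rs.initiate({ _id: "rs0", members: [ { _id: 0, host: "data-node-1:27017" } ] })
> cfg = rs.conf()
> cfg.members.push({ _id: 1, host: "data-node-2:27017" })
> rs.reconfig(cfg)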
Upvotes: 6