Reputation: 4108
I have a rather complicated problem that I think boils down to the following. This morning, I had a replica set consisting of hosts A, B and C, with A as the primary. Then I lost A completely, and B might have been down for a short while (I don't know). It's an EC2 instance, so when it came back it had a different host name (though it had the exact same EBS volume and thus the same file structure).
So at this point, as far as host names go, A is gone, and I have B, C and D. The contents of D are the same as what A had, but the external world views them as two different hosts (which they are). Logging into mongo on B and C shows that they are secondaries (priority 0), and the config still lists the old host A with no priority noted:
SECONDARY> rs.conf() //this is from C
{
    "_id" : "rs_0",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "A:27018" //this is the dead guy ....
        },
        {
            "_id" : 1,
            "host" : "C:27019",
            "priority" : 0
        },
        {
            "_id" : 2,
            "host" : "B:27020",
            "priority" : 0
        }
    ]
}
Any command I issue from B or C comes back with a message telling me I'm not the master, so I can't change any of the hosts in the conf record for this replica set.
Worst case scenario is I can use mongoexport and dump everything to JSON, which is (a) a pain in the ass, (b) very VERY ugly, and (c) not really practical when I'm in prod.
So basically, it boils down to this. What do I do when I have a replica set and I lose control/access to the primary and want to add another host to take over that functionality?
Thanks!
Upvotes: 1
Views: 5101
Reputation: 42342
When you must reconfigure without a primary, you can send the commands to a secondary, but you must include an extra option: {force: true}. This says that you know you are not talking to the primary but want to force the reconfiguration anyway.
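A sketch of what that looks like from the mongo shell on one of your surviving secondaries (the hostnames here are the placeholders from your question; adjust to your real hosts and ports):

```
// Connect to a surviving secondary, e.g. C:27019
cfg = rs.conf()

// Member _id 0 is the dead host A; point that entry at its
// replacement D, which has the same data on the same EBS volume
cfg.members[0].host = "D:27018"

// No primary exists, so the reconfiguration must be forced
rs.reconfig(cfg, {force: true})
```

Forced reconfigurations should be a last resort; they can cause a rollback on members that had writes the rest of the set never saw, which is why the option is not the default.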
Before you proceed, though, I want to point out that priority 0 on every secondary defeats the purpose of having a replica set for automatic failover when the primary fails. Priority 0 means a node can never become primary, and since the only non-zero-priority node failed, your replica set was left without a primary.
I recommend having at least one secondary with a priority higher than 0 (1 is the default). I also recommend using external/resolvable DNS names for your hosts rather than AWS-generated names, so that if you find yourself in this situation again you can simply repoint the name that used to resolve to the dead host at the new host that took its place. In that case you won't need to reconfigure the replica set at all.
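For example, while you are fixing the config anyway, you can give the other members an electable priority (the member indexes below assume the layout shown in your rs.conf() output):

```
// Give members 1 and 2 the default priority of 1 so either
// can be elected primary if the current primary goes down
cfg = rs.conf()
cfg.members[1].priority = 1
cfg.members[2].priority = 1
rs.reconfig(cfg)   // add {force: true} if there is still no primary
```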
For further reading I recommend: http://docs.mongodb.org/manual/tutorial/reconfigure-replica-set-with-unavailable-members/
Upvotes: 2