Allyl Isocyanate

Reputation: 13626

Trying to set up Mongo replication, but ending up with two secondary members and no primary

I've been trying to set up a simple replica set: 1 main mongo, 1 backup, and 1 arbiter.

Unfortunately, firing it up led to main being elected SECONDARY, and the backup being elected PRIMARY (nice work, arbiter).

Main had a priority of 100, and backup a priority of 0, along with a slave delay.

I tried to tell the backup to step down via:

PRIMARY>   db.runCommand({replSetReconfig: conf})
{
        "assertion" : "initiation and reconfiguration of a replica set must
be sent to a node that can become primary",
        "assertionCode" : 13420,
        "errmsg" : "db assertion failure",
        "ok" : 0
}
PRIMARY> .adminCommand({replSetStepDown:1000000, force:1})
Fri Jan 13 17:27:29 SyntaxError: syntax error (shell):1
PRIMARY> db.adminCommand({replSetStepDown:1000000, force:1})
Fri Jan 13 17:27:36 DBClientCursor::init call() failed
Fri Jan 13 17:27:36 query failed : admin.$cmd { replSetStepDown:
1000000.0, force: 1.0 } to: 127.0.0.1
Fri Jan 13 17:27:36 Error: error doing query: failed shell/
collection.js:151
Fri Jan 13 17:27:36 trying reconnect to 127.0.0.1
Fri Jan 13 17:27:36 reconnect 127.0.0.1 ok
SECONDARY>
SECONDARY>
SECONDARY> db.adminCommand({replSetStepDown:1000000, force:1})
{ "errmsg" : "not primary so can't step down", "ok" : 0 }

Which worked, but main is still secondary as well.

Any ideas? Thanks!

config file

conf = {
  version: 90002,
  _id : "example",
  members: [
    {
      _id : 1,
      host : "main.example.com:27017",
      priority: 100
    },
    {
      _id : 2,
      host : "backup.example.com:27017",
      priority: 0,
      slaveDelay : 3600
    },
    {
      _id : 3,
      host : "arbiter.example.com:27017",
      priority: 0,
      arbiterOnly: true
    }
  ]
};

rs.status() on main

{
  "set" : "example",
  "date" : ISODate("2012-01-13T23:29:09Z"),
  "myState" : 2,
  "members" : [
    {
      "_id" : 1,
      "name" : "main.example.com:27017",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "optime" : {
        "t" : 1326496827000,
        "i" : 1
      },
      "optimeDate" : ISODate("2012-01-13T23:20:27Z"),
      "self" : true
    },
    {
      "_id" : 2,
      "name" : "backup.example.com:27017",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 324,
      "optime" : {
        "t" : 1326492641000,
        "i" : 1
      },
      "optimeDate" : ISODate("2012-01-13T22:10:41Z"),
      "lastHeartbeat" : ISODate("2012-01-13T23:29:09Z"),
      "pingMs" : 0
    },
    {
      "_id" : 3,
      "name" : "arbiter.example.com:27017",
      "health" : 1,
      "state" : 7,
      "stateStr" : "ARBITER",
      "uptime" : 324,
      "optime" : {
        "t" : 0,
        "i" : 0
      },
      "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
      "lastHeartbeat" : ISODate("2012-01-13T23:29:09Z"),
      "pingMs" : 0
    }
  ],
  "ok" : 1

}

rs.status() on backup

{
  "set" : "example",
  "date" : ISODate("2012-01-13T23:31:06Z"),
  "myState" : 2,
  "members" : [
    {
      "_id" : 0,
      "name" : "BACKUPVMW02:27017",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "optime" : {
        "t" : 1326492641000,
        "i" : 1
      },
      "optimeDate" : ISODate("2012-01-13T22:10:41Z"),
      "self" : true
    }
  ],
  "ok" : 1

Upvotes: 3

Views: 2412

Answers (2)

nnythm

Reputation: 3320

You might be using an old version of MongoDB: the field used for voting was called "votes" back then, I think, not "priority". Since the switch to the "priority" field, you should be applying configuration changes with rs.reconfig(configfile), I think.
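A sketch of what preparing that reconfig looks like. In the mongo shell, `cfg` would come from `rs.conf()`; here it is the config from the question copied inline, and the version bump is required so the set accepts the new document:

```javascript
// Sketch: prepare an updated config document for rs.reconfig().
const cfg = {
  version: 90002,
  _id: "example",
  members: [
    { _id: 1, host: "main.example.com:27017", priority: 100 },
    { _id: 2, host: "backup.example.com:27017", priority: 0, slaveDelay: 3600 },
    { _id: 3, host: "arbiter.example.com:27017", priority: 0, arbiterOnly: true }
  ]
};

cfg.version += 1; // every reconfig must carry a higher version number

// In the mongo shell you would then run: rs.reconfig(cfg)
// -- and per assertion 13420 in the question, it must be sent to a
// node that can become primary, not to the delayed backup.
```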

Upvotes: 1

Allyl Isocyanate

Reputation: 13626

It turns out that a 3-node setup with 1 primary, 1 slave-delayed member, and 1 arbiter doesn't work well. I removed the slave delay, and the priority was respected (after reinstalling all nodes from scratch and removing the data directories).
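For anyone hitting the same thing, a sketch of a config with the slave delay removed, based on the members in the question (whether this matches the exact final config is an assumption; the version number is illustrative for a fresh initiation):

```javascript
// Sketch: same replica set, slaveDelay removed so the backup replicates
// in real time; priority 0 still keeps it from becoming primary.
const conf = {
  _id: "example",
  version: 1, // fresh rs.initiate() after wiping the data directories
  members: [
    { _id: 1, host: "main.example.com:27017", priority: 100 },
    { _id: 2, host: "backup.example.com:27017", priority: 0 },
    { _id: 3, host: "arbiter.example.com:27017", arbiterOnly: true }
  ]
};
// In the mongo shell: rs.initiate(conf)
```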

Upvotes: 1
