suvankar

Reputation: 1558

MongoDB repair command failed

Previously I was running out of disk space and mongod stopped working. I have since increased the disk size, but mongod still does not start.

Though I have journaling enabled, I executed the following repair command:
sudo -u mongodb mongod --dbpath /var/lib/mongodb/ --repair

But the repair command hit an exception, stopped repairing, and exited.

    Fri Nov 30 13:29:36 [initandlisten] build index bd_production.news { _id: 1 }

    Fri Nov 30 13:29:36 [initandlisten]      fastBuildIndex dupsToDrop:0

    Fri Nov 30 13:29:36 [initandlisten] build index done.  scanned 2549 total
    records. 0.008 secs

    Fri Nov 30 13:29:36 [initandlisten]  bd_production.change_sets
    Assertion failure isOk() src/mongo/db/pdfile.h 360

  0x879d86a 0x85a9835 0x85e441e 0x84caa02 0x84c7d19 0x8229b5a 0x822bfd8
 0x875bd51 0x875f0c7 0x8760df4 0x83e6523 0x83b6c3b 0x8753b07 0x83b92bf
 0x8827ab7 0x882a53b 0x882d4bf 0x882d691 0x85ed280 0x81719dc

 mongod(_ZN5mongo15printStackTraceERSo+0x2a) [0x879d86a]

 mongod(_ZN5mongo10logContextEPKc+0xa5) [0x85a9835]
 ...
 ...
 ...
    ... some error msg 

    Fri Nov 30 13:29:36 [initandlisten] assertion 0 assertion
     src/mongo/db/pdfile.h:360 ns:bd_production.change_sets
    query:{}

     Fri Nov 30 13:29:36 [initandlisten] problem detected during query over
     bd_production.change_sets : { $err: "assertion
     src/mongo/db/pdfile.h:360" }

    Fri Nov 30 13:29:36 [initandlisten] query
    bd_production.change_sets ntoreturn:0 keyUpdates:0 exception:
    assertion src/mongo/db/pdfile.h:360  reslen:71 197ms

     Fri Nov 30 13:29:36 [initandlisten] exception in initAndListen: 13106
     nextSafe(): { $err: "assertion src/mongo/db/pdfile.h:360" }, terminating

     Fri Nov 30 13:29:36 dbexit:
    ...
    ...

The news collection is repaired successfully, but the change_sets collection is not.

How can I repair that particular collection (change_sets) or the database?

UPDATE: When I run mongodump with --repair for the change_sets collection, I get the following error message:

    Tue Dec  4 10:45:21 [tools]         backwards extent pass
    Tue Dec  4 10:45:21 [tools]             extent loc: 5:1181e000
    Tue Dec  4 10:45:21 [FileAllocator] allocating new datafile /home/suvankar/dd/bd_production.5, filling with zeroes...
    Tue Dec  4 10:45:21 [FileAllocator] creating directory /home/suvankar/dd/_tmp
    Tue Dec  4 10:45:21 [FileAllocator] done allocating datafile /home/suvankar/dd/bd_production.5, size: 511MB,  took 0.042 secs
    Tue Dec  4 10:45:21 [tools]                 warning: Extent not ok magic: 0 going to try to continue
    Tue Dec  4 10:45:21 [tools]                 length:0
    Tue Dec  4 10:45:21 [tools]                     ERROR: offset is 0 for record which should be impossible
    Tue Dec  4 10:45:21 [tools]                     wrote 1 documents
    Tue Dec  4 10:45:21 [tools]             extent loc: 0:0
    Tue Dec  4 10:45:21 [tools]                 ERROR: invalid extent ofs: 0
    Tue Dec  4 10:45:21 [tools]                  5 objects
    Tue Dec  4 10:45:21 dbexit: 
    Tue Dec  4 10:45:21 [tools] shutdown: going to close listening sockets...
    Tue Dec  4 10:45:21 [tools] shutdown: going to flush diaglog...
    Tue Dec  4 10:45:21 [tools] shutdown: going to close sockets...
    Tue Dec  4 10:45:21 [tools] shutdown: waiting for fs preallocator...
    Tue Dec  4 10:45:21 [tools] shutdown: lock for final commit...

Upvotes: 2

Views: 9891

Answers (1)

Adam Comerford

Reputation: 21692

If mongod with --repair is not doing it, then it is running into a level of corruption that it cannot fix or work around well enough to produce a valid, correct set of database files to start up from.

You can run mongodump with --repair, which is more aggressive about trying to work around the corruption, and which does not start a mongod instance (hence it does not require the files to be correct in order to proceed):

mongodump --repair --dbpath /var/lib/mongodb/ <other options here>
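For example, a run limited to just the damaged collection might look like the following (the database name, collection name, and output directory are only illustrative, so substitute your own; also make sure no mongod is running against that dbpath, since the tool opens the data files directly):

    # dump only the damaged collection straight from the data files,
    # writing BSON to a directory with plenty of free space
    mongodump --repair --dbpath /var/lib/mongodb/ \
        --db bd_production --collection change_sets \
        --out /path/to/dump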

Be aware, though, that because of the way it attempts to route around the corruption, you may end up with multiple copies of a document. Given how mongorestore works this is not an issue, but depending on the level of corruption you can end up with dump files far larger than you would expect. In a very extreme case I once saw 10x the data produced, though that was the exception rather than the rule.

Once you have dumped everything out to your satisfaction, start mongod against a clean dbpath and re-import the dump with mongorestore to get back to a good state.
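A rough sketch of that last step on a typical Ubuntu setup might be the following (the service name and paths are assumptions; keep the old files around until you have verified the restore):

    # move the damaged data files aside rather than deleting them
    sudo mv /var/lib/mongodb /var/lib/mongodb.corrupt
    sudo mkdir /var/lib/mongodb
    sudo chown mongodb:mongodb /var/lib/mongodb

    # start mongod against the fresh, empty dbpath
    sudo service mongodb start

    # re-import everything from the dump taken above
    mongorestore /path/to/dump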

Upvotes: 4
