Dorian Cransac

Reputation: 75

MongoDB very verbose despite log verbosity set to 0

My MongoDB instance produces gigabytes of logs (through the QUERY component) even though I set the global verbosity level to 0 and all components to -1 (verbosity inheritance).

This has a very bad impact on the instance's performance.

Below is a sample of the logged records, the output of db.stats() and of db.getLogComponents().

PS / EDIT - since I'm already seeing a downvote... : I've already tried everything I could find online (using a YAML config file with quiet = true, checking that the profiling level was at 0, even setting the QUERY log component to 0 explicitly), but nothing works; mongo is still logging those queries massively...
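For reference, this is roughly what I ran from the mongo shell when checking/setting those values (output values are illustrative):

db.getProfilingStatus()      // profiling level and slow-op threshold, e.g. { "was" : 0, "slowms" : 100 }
db.setLogLevel(0, "query")   // set the QUERY component verbosity to 0 explicitly
db.setLogLevel(-1, "query")  // or back to -1 so it inherits the global verbosity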

Any idea as to why mongo keeps producing these log entries? Any suggestions to turn that off? Thanks.

2016-02-23T09:14:27.089+0100 I QUERY [conn526] query jdigger.stacktraces query: { hashcode: 730309037 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:2 reslen:1114 locks:{ Global: { acquireCount: { r: 800 } }, Database: { acquireCount: { r: 400 } }, Collection: { acquireCount: { r: 400 } } } 268ms
2016-02-23T09:14:27.089+0100 I QUERY [conn546] query jdigger.stacktraces query: { hashcode: 1 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:4 reslen:232 locks:{ Global: { acquireCount: { r: 800 } }, Database: { acquireCount: { r: 400 } }, Collection: { acquireCount: { r: 400 } } } 269ms
2016-02-23T09:14:27.089+0100 I QUERY [conn532] query jdigger.stacktraces query: { hashcode: -1176121626 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:3 reslen:2162 locks:{ Global: { acquireCount: { r: 800 } }, Database: { acquireCount: { r: 400 } }, Collection: { acquireCount: { r: 400 } } } 268ms
2016-02-23T09:14:27.089+0100 I QUERY [conn533] query jdigger.stacktraces query: { hashcode: -1452854181 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:1 reslen:888 locks:{ Global: { acquireCount: { r: 800 } }, Database: { acquireCount: { r: 400 } }, Collection: { acquireCount: { r: 400 } } } 269ms
2016-02-23T09:14:27.089+0100 I QUERY [conn529] query jdigger.stacktraces query: { hashcode: 401721954 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:2 reslen:776 locks:{ Global: { acquireCount: { r: 800 } }, Database: { acquireCount: { r: 400 } }, Collection: { acquireCount: { r: 400 } } } 269ms
2016-02-23T09:14:27.091+0100 I QUERY [conn524] query jdigger.stacktraces query: { hashcode: -73774731 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:1 reslen:311 locks:{ Global: { acquireCount: { r: 800 } }, Database: { acquireCount: { r: 400 } }, Collection: { acquireCount: { r: 400 } } } 269ms
2016-02-23T09:14:27.195+0100 I QUERY [conn534] query jdigger.stacktraces query: { hashcode: 1 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:4 reslen:232 locks:{ Global: { acquireCount: { r: 800 } }, Database: { acquireCount: { r: 400 } }, Collection: { acquireCount: { r: 400 } } } 278ms
2016-02-23T09:14:27.203+0100 I QUERY [conn525] query jdigger.stacktraces query: { hashcode: 1 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:4 reslen:232 locks:{ Global: { acquireCount: { r: 800 } }, Database: { acquireCount: { r: 400 } }, Collection: { acquireCount: { r: 400 } } } 268ms
2016-02-23T09:14:27.204+0100 I QUERY [conn528] query jdigger.stacktraces query: { hashcode: 401721954 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:2 reslen:776 locks:{ Global: { acquireCount: { r: 800 } }, Database: { acquireCount: { r: 400 } }, Collection: { acquireCount: { r: 400 } } } 269ms
2016-02-23T09:14:27.204+0100 I QUERY [conn547] query jdigger.stacktraces query: { hashcode: 196127445 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:2 reslen:1004 locks:{ Global: { acquireCount: { r: 800 } }, Database: { acquireCount: { r: 400 } }, Collection: { acquireCount: { r: 400 } } } 269ms
2016-02-23T09:14:27.204+0100 I QUERY [conn535] query jdigger.stacktraces query: { hashcode: -1176121626 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:3 reslen:2162 locks:{ Global: { acquireCount: { r: 800 } }, Database: { acquireCount: { r: 400 } }, Collection: { acquireCount: { r: 400 } } } 269ms
2016-02-23T09:14:27.205+0100 I QUERY [conn544] query jdigger.stacktraces query: { hashcode: 401721954 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:2 reslen:776 locks:{ Global: { acquireCount: { r: 800 } }, Database: { acquireCount: { r: 400 } }, Collection: { acquireCount: { r: 400 } } } 269ms
2016-02-23T09:14:27.207+0100 I QUERY [conn543] query jdigger.stacktraces query: { hashcode: 1 } planSummary: COLLSCAN ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:51110 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:399 nreturned:4 reslen:232 locks:{ Global: { acquireCount: { r: 800 } }, Database: { acquireCount: { r: 400 } }, Collection: { acquireCount: { r: 400 } } } 269ms

db.stats()
{
    "db" : "jdigger",
    "collections" : 2,
    "objects" : 4821385,
    "avgObjSize" : 284.73536857147894,
    "dataSize" : 1372818835,
    "storageSize" : 395268096,
    "numExtents" : 0,
    "indexes" : 4,
    "indexSize" : 81928192,
    "ok" : 1
}

db.getLogComponents()
{
    "verbosity" : 0,
    "accessControl" : { "verbosity" : -1 },
    "command" : { "verbosity" : -1 },
    "control" : { "verbosity" : -1 },
    "executor" : { "verbosity" : -1 },
    "geo" : { "verbosity" : -1 },
    "index" : { "verbosity" : -1 },
    "network" : { "verbosity" : -1, "asio" : { "verbosity" : -1 }, "bridge" : { "verbosity" : -1 } },
    "query" : { "verbosity" : -1 },
    "replication" : { "verbosity" : -1 },
    "sharding" : { "verbosity" : -1 },
    "storage" : { "verbosity" : -1, "journal" : { "verbosity" : -1 } },
    "write" : { "verbosity" : -1 },
    "ftdc" : { "verbosity" : -1 }
}

Upvotes: 0

Views: 2257

Answers (2)

Dorian Cransac

Reputation: 75

I'm not sure whether this should be considered a workaround or an actual solution, but I was able to reach my goal, i.e. reduce the general logging overhead, by using db.setProfilingLevel(0, 1000).

As someone pointed out, MongoDB seems to log "long-lasting" queries by default, regardless of the profiling level setting or any other parameter.

Since most of my queries were 200 ms queries, setting the threshold at 1 second did the trick for me. Given the implementation details of my use case, the 200 ms were not necessarily shocking, so that's where I didn't really fit in the "norm".
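Concretely, this is all it took (the 1 second threshold is just what suited my workload, adjust as needed):

db.setProfilingLevel(0, 1000)   // profiler stays off, slow-op log threshold raised from 100 ms to 1000 ms
db.getProfilingStatus()         // should now report { "was" : 0, "slowms" : 1000 }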

Upvotes: 1

a-h

Reputation: 4284

MongoDB's profiler automatically logs any queries which take longer than 100 ms, so it could be that. You can switch it off by using db.setProfilingLevel(0) - see the documentation at https://docs.mongodb.org/manual/tutorial/manage-the-database-profiler/

However, I'd also consider adding an index to your collection which is optimised for your query, because right now, the query is reading every record in the collection (COLLSCAN).
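Something along these lines should work, assuming the queries in your log sample are representative (they all filter jdigger.stacktraces on hashcode):

use jdigger
db.stacktraces.createIndex({ hashcode: 1 })
// re-check the plan: the winning plan should now be an IXSCAN on hashcode rather than a COLLSCAN
db.stacktraces.find({ hashcode: 730309037 }).explain()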

Upvotes: 0
