Reputation: 4086
I'm trying to understand a MongoDB memory use pattern that I see in our MMS logs.
Normally, resident memory sits around 3GB, virtual memory is steady at 84GB, and mapped memory is about 41GB. Then, in a series of peaks and troughs usually lasting just a few minutes, mapped memory disappears completely, virtual memory drops to around 41GB, and resident memory jumps to 41GB or spikes to 84GB. In one recent episode, however, the peaks and troughs lasted 3.5 hours.
MongoDB appears to be running normally, and other metrics such as opcounters and network traffic look normal, but graphs changing so dramatically when there was unlikely to be any significant load change makes me ... curious.
This is a standalone instance running MongoDB 1.8.3.
Typical memory usage, not during an episode (I only found the longer episode as it was ending):
$ free -m
             total       used       free     shared    buffers     cached
Mem:         32176      31931        245          0        628      29449
-/+ buffers/cache:       1854      30322
Swap:         1983          0       1983
What is causing this?
Upvotes: 3
Views: 610
Reputation: 2396
MMS gets memory statistics from the operating system by reading /proc/$PID/stat. The fluctuations in virtual and resident memory are reporting errors, and can safely be ignored.
(If you hover over the spikes, you'll notice that they occur at times when one or two of the three stats (virtual memory, mapped memory, or resident memory) are missing...)
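For reference, here is a rough sketch of how you can read the same raw numbers yourself. Per proc(5), field 23 of /proc/$PID/stat is vsize (virtual size in bytes) and field 24 is rss (resident set in pages); this assumes a single mongod process and that the comm field contains no spaces, otherwise the field offsets shift:

$ pid=$(pidof mongod)
$ awk -v pgsz=$(getconf PAGESIZE) \
    '{ printf "virtual: %.1f GB   resident: %.1f GB\n", $23/2^30, $24*pgsz/2^30 }' \
    /proc/$pid/stat

If the values printed here look sane while the graph shows a spike or a gap, that points to a collection/reporting glitch rather than an actual change in memory usage.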
Upvotes: 3