Chris W

Reputation: 33

What is using so much uncommitted "private data" on Windows Server 2003?

So I have a native C++ application, and it needs to keep track of lots of things over long periods of time. It runs out of memory when Task Manager says the process has reached somewhere between 800 and 1200 MB, even though the limit should be about 2GB.

I finally got a clue as to what's going on when I ran VMMap against my process, but that just gave me more questions. What I discovered is that most of the memory sits in large "Private Data" blocks, each split into pairs of committed and uncommitted sub-blocks.

An example of these sub-block pairs (not my application, but the idea is the same): http://www.flickr.com/photos/95123032@N00/5280550393

It seems as though when one block of private data gets fully committed, a new block (usually the same size as, or double the size of, the previous largest block) gets allocated. Sounds fair. However, I have seen 3 blocks, all more than 100MB each, with less than 30MB committed. My application shouldn't behave in a way (i.e. use up 400MB and then shrink by 300MB within a few hours) that would make that possible.

As far as I can tell, the "Size" is the actual amount of virtual address space that has been reserved. "Committed" is the amount of "Size" that is actually being used (i.e. through calls to new/malloc). If that is indeed the case, then why is there such a huge discrepancy between Size and Committed? And why is it allocating blocks that are multiple hundreds of megabytes in size?
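For reference, here is the reserve/commit distinction in Win32 terms (a minimal sketch, not code from my application):

    #include <windows.h>

    int main() {
        // Reserve 100 MB of address space without committing any of it.
        void* base = VirtualAlloc(NULL, 100 * 1024 * 1024,
                                  MEM_RESERVE, PAGE_NOACCESS);
        // Commit just the first 1 MB of that reservation.
        void* chunk = VirtualAlloc(base, 1 * 1024 * 1024,
                                   MEM_COMMIT, PAGE_READWRITE);
        // VMMap would show this region with Size = 100 MB, Committed = 1 MB.
        return chunk ? 0 : 1;
    }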

The somewhat strange thing is that the behavior is entirely different when running on Windows 7. Whereas on 2003 Server the application uses Private Data, on Windows 7 it uses Heap. So... why? Why does VMMap show primarily Private Data usage on 2003, but primarily Heap usage on 7? What's the difference? Well, one difference is that I can't use the "Heap Allocations..." button in VMMap to see where all of that Private Data is being allocated.

I was beginning to wonder if excessive use of std::string was causing this problem, since the strings that I recognized in the pairs (mentioned above) primarily consisted of std::strings that were frequently being created and destroyed (implying lots of memory allocation/deallocation). I converted all I could to use character arrays or memory from a memory pool, but that seems to have had no effect. All of my other objects that are new/deleted frequently already have their own memory pools.

I also found out about the low-fragmentation heap, so I tried enabling that, but it also didn't make a difference. I'm thinking that's because Windows 2003 is not actually using the heap proper. VMMap shows that the low-fragmentation heap is enabled, but since the heap isn't actually being used (i.e. it's using Private Data instead), it doesn't actually make a difference.
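(For reference, this is how I enabled it; a minimal sketch using the documented HeapSetInformation call:)

    #include <windows.h>

    int main() {
        // Ask the heap manager to use the low-fragmentation heap (LFH)
        // for the default process heap; the value 2 selects the LFH.
        ULONG heapInfo = 2;
        BOOL ok = HeapSetInformation(GetProcessHeap(),
                                     HeapCompatibilityInformation,
                                     &heapInfo, sizeof(heapInfo));
        return ok ? 0 : 1;
    }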

What actually seems to be happening is that those sub-block pairs are fragmenting the large Private Data blocks, which is causing the OS to allocate new blocks. Eventually, the fragmentation gets so bad that even though there's lots of uncommitted space, none of it seems to be usable and the process runs out of memory.
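One way to check this (a sketch I put together; it assumes only the documented VirtualQuery behavior) is to walk the address space and compare the total free space against the largest single free region. If the largest free region is small while the total is large, fragmentation is the culprit:

    #include <windows.h>
    #include <stdio.h>

    int main() {
        SYSTEM_INFO si;
        GetSystemInfo(&si);

        char* p = (char*)si.lpMinimumApplicationAddress;
        SIZE_T totalFree = 0, largestFree = 0;
        MEMORY_BASIC_INFORMATION mbi;

        // Walk every region in the process's address space.
        while (p < (char*)si.lpMaximumApplicationAddress &&
               VirtualQuery(p, &mbi, sizeof(mbi)) == sizeof(mbi)) {
            if (mbi.State == MEM_FREE) {
                totalFree += mbi.RegionSize;
                if (mbi.RegionSize > largestFree)
                    largestFree = mbi.RegionSize;
            }
            p += mbi.RegionSize;
        }

        printf("total free: %Iu MB, largest free block: %Iu MB\n",
               totalFree >> 20, largestFree >> 20);
        return 0;
    }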

So my questions are:

  1. Why is Windows Server 2003 using Private Data instead of Heap? Does it matter? Is there a way to make Windows Server 2003 use Heap memory instead? If so, would that improve my situation at all?
  2. Is there any way to control how Private Data is allocated by the OS's memory allocator?
  3. Is it possible to create my own custom heap and allocate off of that (without changing the majority of my codebase), and could that improve my situation? I know it's possible to make custom heaps, but as far as I can tell, you need to explicitly allocate from the custom heap instead of just calling new or just using STL containers normally. (A sketch of what I mean follows this list.)
  4. Is there anything I'm missing or would be worth trying?
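Regarding question 3, this is the kind of thing I mean: a minimal, hypothetical sketch that routes the global new/delete through a private Win32 heap created with HeapCreate, so plain new and default STL containers pick it up without code changes. (I haven't verified this is safe for a whole codebase, e.g. across DLL boundaries.)

    #include <windows.h>
    #include <new>

    // Hypothetical private heap, created lazily to avoid
    // static-initialization-order problems.
    static HANDLE PrivateHeap() {
        static HANDLE h = HeapCreate(0 /* flags */, 0 /* initial */, 0 /* grow */);
        return h;
    }

    void* operator new(size_t size) {
        void* p = HeapAlloc(PrivateHeap(), 0, size ? size : 1);
        if (!p) throw std::bad_alloc();
        return p;
    }

    void operator delete(void* p) throw() {
        if (p) HeapFree(PrivateHeap(), 0, p);
    }

    // operator new[] and operator delete[] would need the same treatment.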

Upvotes: 3

Views: 3508

Answers (3)

Kert

Reputation: 31

I'm encountering the same issue.

In Windows 2003, my application gets an out-of-memory exception in a C++/CLI module when trying to allocate a 22MB array using gcnew. The same process works fine in Windows 7.

VMMap shows the "private data" entry is almost 2GB on Windows 2003. After I enabled the /3GB flag, this entry increased to almost 3GB. The "heap" entry is about 14MB and the "managed heap" entry is empty!

In Windows 7, the "private data" is only 62MB, the "heap" is 316MB and the "managed heap" is 397MB. Overall memory usage is much lower than on Windows 2003.

Upvotes: 0

Bryan

Reputation: 12200

when task manager says that the process reaches somewhere between 800 and 1200 MB of memory, when the limit should be about 2GB

Probably you are looking at "Working Set" in Task Manager, whereas the 2GB limit is on virtual address space. Task Manager doesn't show the amount of VM reserved; it shows the amount committed.

"Committed" is the amount of "Size" that is actually being used (i.e. through calls to new/malloc).

No, Committed means the pages are actually backed by RAM or the pagefile. The heap manager reserves address space up front and only commits pages as allocations actually need them, so commit lags well behind the reserved Size.

1. Why is Windows Server 2003 using Private Data instead of Heap?

According to "Windows Sysinternals Administrator's Reference" by Mark Russinovich and Aaron Margosis:

Private Data memory is memory that is allocated by VirtualAlloc and that is not further handled by the Heap Manager or by the .Net runtime

So either your program is managing its memory differently on the two OS's, or VMmap is unable to detect the way in which this memory is being managed as a heap on Windows Server 2003.

4. Is there anything I'm missing or would be worth trying?

You can run with a 3GB limit on a 32-bit OS and a 4GB limit for 32-bit processes on a 64-bit OS. Search for the "/3GB" boot option and the "/LARGEADDRESSAWARE" linker flag.
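A quick way to check which limit a process actually has is to query the documented MEMORYSTATUSEX fields (a minimal sketch):

    #include <windows.h>
    #include <stdio.h>

    int main() {
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);
        GlobalMemoryStatusEx(&ms);
        // ullTotalVirtual is the user-mode address space for this process:
        // ~2 GB by default, ~3 GB with /3GB plus /LARGEADDRESSAWARE, and
        // ~4 GB for a large-address-aware 32-bit process on a 64-bit OS.
        printf("total virtual:     %llu MB\n",
               (unsigned long long)(ms.ullTotalVirtual >> 20));
        printf("available virtual: %llu MB\n",
               (unsigned long long)(ms.ullAvailVirtual >> 20));
        return 0;
    }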

A great source of information on this kind of stuff is the book "Windows Internals 6th Edition" by Mark Russinovich, David Solomon and Alex Ionescu.

Upvotes: 0

nanda

Reputation: 804

Private data is just a classification for all the memory that is not shared between two or more processes. Heap, relocated DLL pages, stacks of all the threads in a process, unshared memory-mapped files, etc. fall into the category of private data.

A request for memory from a process (via VirtualAlloc) will be failed by the OS when one of these conditions is true:

  1. Contiguous virtual address space (not memory) is not available to hold the size requested.
  2. The commit charge (the total committed memory of all processes and the operating system) has reached its upper limit, that being RAM + page file size.

Apart from this, heap allocations may fail for their own reasons: during expansion, a heap will actually try to acquire more memory than the size of the allocation request that triggered the expansion, and if that fails, the allocation fails too, even though the actual requested size might be available through VirtualAlloc.

A few things that tend to accumulate memory are:

  1. Having many heaps: they hog memory because each keeps extra space in reserve, so many heaps means a lot of reserved space probably going unused. Heap compaction (e.g. via HeapCompact) might help.
  2. STL containers like vector and map might not shrink after elements are removed from them. Compacting them might help too (see the sketch after this list).
  3. Libraries like COM do some caching and thus accumulate memory; it might help to investigate individual libraries to learn about their memory-hogging habits.
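For instance, a minimal sketch of the classic pre-C++11 "swap trick" for releasing a vector's unused capacity (on C++11 and later, shrink_to_fit serves the same purpose):

    #include <vector>

    template <typename T>
    void compact(std::vector<T>& v) {
        // Copy into a right-sized temporary, then swap buffers; the
        // temporary's destructor releases the old, oversized buffer.
        std::vector<T>(v).swap(v);
    }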

Upvotes: 3
