jishi

Reputation: 24614

Memory usage between 32-bit pool and 64-bit pool

We have an ASP.NET application, built around MonoRail and NHibernate, and I have noticed a strange behavior between if running it with 64-bit mode or 32-bit mode. Everything is compiled as AnyCPU, and runs fine with both modes, but the memory usage differs dramatically.

Look at the following snapshots from ANTS:

32-bit snapshot: (ANTS memory profiler screenshot)

vs

64-bit snapshot: (ANTS memory profiler screenshot)

The usage scenario for both snapshots is pretty much equivalent (I hit the same pages on both runs).

Firstly, why is the Unused memory so high in 64-bit mode? And why would unmanaged memory be 4 times the size on 64-bit mode?

Any insight on this would be really helpful.

Upvotes: 21

Views: 5846

Answers (3)

CIGuy

Reputation: 5114

The initial memory allocation for a 64 bit process is much higher than it would be for an equivalent 32 bit process.

Theoretically this allows garbage collection to run much less often, which should increase performance. It also helps with fragmentation as larger memory blocks are allocated at a time.

This article: https://devblogs.microsoft.com/dotnet/64-bit-vs-32-bit/ gives a more detailed explanation.

The higher unmanaged memory usage you are seeing is probably due to the fact that a .NET object in 32-bit mode uses a minimum of 12 bytes (8 bytes of overhead + a 4-byte reference), while the same object in 64-bit mode takes 24 bytes (16 bytes of overhead + an 8-byte reference).

Another article that explains this in more detail: http://www.simple-talk.com/dotnet/.net-framework/object-overhead-the-hidden-.net-memory--allocation-cost/

Upvotes: 15

Aki Suihkonen

Reputation: 20027

The standard answer to memory issues on 64-bit systems is that most memory operations are by default aligned to 16 bytes. Memory reads to/from the 128-bit XMM registers are expected to align with 16-byte boundaries. Two 8-byte parameters on the stack take the same amount of space as three would (the return address occupies the remaining 8 bytes). GNU malloc aligns allocated areas to 16-byte boundaries.

If the allocated units are small, the overhead will be huge: first the overhead from aligning the data itself, and then the overhead of aligning the bookkeeping associated with that data.

Also, I'd predict that on 64-bit systems data structures have evolved: instead of binary, 2-3-4, balanced, splay, or similar trees, it possibly makes sense to use radix-16 trees, which can carry a lot of slack but can be processed quickly with the SSE extensions that are guaranteed to be there.

Upvotes: 3

Peter

Reputation: 27944

I can't tell you exactly what is going on, but I can probably make a good guess. A 32-bit process has different memory limitations than a 64-bit process. The CLR will run the GC often in the 32-bit process; you can see this in the spikes on your graph. However, when you are running the 64-bit process, the GC will not be called until you are getting low on memory. This depends on the total memory usage of your system.

In numbers: your 32-bit process can only allocate around 1 GB, while your 64-bit process can allocate all of your memory. In the 32-bit process the GC will start cleaning up sooner, because your program will suffer performance problems when it uses too much memory. The CLR in the 64-bit process will start cleaning up when your total system memory drops below a certain threshold.

Upvotes: 2
