CoderDake

Reputation: 1547

Strange malloc behavior won't allow more than 2 GB memory allocation in a 64-bit process

This question concerns a program that I am developing.

I am working on a project that requires that no row, or set of rows, larger than 2 GB is sent over the network (the network can't send data in groups larger than 2 GB). I have made the necessary changes to the code so it won't send such groups, and now I am trying to build test cases.

I have already built a test that creates just under 1 billion rows that together occupy more than 2 GB. The program properly filters out this group of rows before it is sent over the network.

The problem I am running into is that I need to create a single row, larger than 2 GB, that holds either one column with a single string or a collection of columns holding strings. But when the string(s) start to occupy close to 2 GB, malloc returns NULL.

I did some research and found that the likely cause is a lack of contiguous memory, so I started adding more columns with smaller strings. I have gone as far as breaking the 2 GB string up across 64 columns so that no single allocation is as large. I am still running into the same problem, though, and I am getting suspicious that I am overlooking something.
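Here is a minimal sketch of the kind of sanity check I mean (the 3 GB size is just illustrative): if the allocator itself were the limit, a standalone allocation like this should also fail.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Allocate a single 3 GB block directly, bypassing the row/column
       code, to check whether the allocator itself is the limit. */
    size_t size = 3ULL * 1024 * 1024 * 1024;

    char *block = malloc(size);
    if (block == NULL) {
        printf("malloc of %llu bytes failed\n", (unsigned long long)size);
        return 1;
    }

    memset(block, 0, size);   /* touch the memory so it is actually committed */
    printf("malloc of %llu bytes succeeded\n", (unsigned long long)size);
    free(block);
    return 0;
}
```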

It is a 64-bit process on a 64-bit Windows 7 system with 8 GB of RAM. (I have also tested it on a 64-bit Red Hat machine with 24 GB of RAM.)

Does anyone have any insight into why the system won't allocate memory to the program as the size approaches 2 GB?

P.S. I also looked into how much memory each process can allocate on a 64-bit system, and it was over 100 TB. Given that the limit is so high, the fact that I can't allocate memory as I approach 2 GB really confuses me.

Upvotes: 3

Views: 2199

Answers (1)

CoderDake

Reputation: 1547

After much exploration of the large body of code where I was having this problem, I noticed that the size being passed to calloc (as a uint64_t) was being calculated by a function that returned a signed 32-bit integer. Once the size grew past 2 GB, this signed value overflowed and went negative, and when the compiler implicitly converted it to uint64_t it was sign-extended, setting the highest bits. That, of course, resulted in calloc trying to allocate an enormous amount of memory and returning NULL.
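A minimal sketch of the failure mode (row_size is a hypothetical stand-in for the real size function in my code base):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for the real size function: it returns a signed
   32-bit int, so any size past 2 GB wraps around to a negative value. */
static int32_t row_size(void)
{
    return (int32_t)0x90000000;   /* ~2.25 GB, negative as a signed int */
}

int main(void)
{
    int32_t  size   = row_size();         /* -1879048192 */
    uint64_t size64 = (uint64_t)size;     /* sign-extended: 0xFFFFFFFF90000000 */

    printf("signed: %d, as uint64: %llu\n",
           (int)size, (unsigned long long)size64);

    /* calloc is now asked for roughly 18 exabytes and returns NULL. */
    void *p = calloc(size64, 1);
    printf("calloc returned %p\n", p);
    free(p);
    return 0;
}
```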

There are, of course, a couple of possible solutions:

  1. change the return type of the size function to uint32_t (this would have been too large a change for my code base, given time limits)
  2. cast the result of the size function to uint32_t before passing it to calloc (the option I chose, as a temporary way to get past the bad allocation; see the sketch below)
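A minimal sketch of option 2, reusing the hypothetical row_size from above (on a 64-bit machine with enough free memory, the ~2.25 GB request now succeeds):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Same hypothetical stand-in as in the sketch above. */
static int32_t row_size(void)
{
    return (int32_t)0x90000000;
}

int main(void)
{
    /* Option 2: go through uint32_t first so the sign-extended upper
       bits are discarded; sizes up to 4 GB survive the round trip. */
    uint32_t size32 = (uint32_t)row_size();   /* 0x90000000, ~2.25 GB */

    void *p = calloc(size32, 1);              /* a sane request again */
    printf("calloc of %llu bytes -> %p\n", (unsigned long long)size32, p);
    free(p);
    return 0;
}
```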

I hope this helps someone else eventually,

Dan

Upvotes: 2
