zazi

Reputation: 71

C++ array allocation doesn't really allocate

I have created a virtual-RAM class, VRam, to speed up memory allocation at runtime.

typedef unsigned long long ulli;   // assumed typedef; not shown in the original

class VRam
{
  unsigned char ***data;
  ...
public:
  VRam(ulli length);
  void *allocate(ulli size);
  ...
};

In my program, small arrays and variables are allocated in groups: I allocate one huge array at the beginning, hand out parts of it later, and then clear the whole thing at once. The problem is the following: I recently updated the class to support huge arrays by splitting them into smaller blocks. My program worked perfectly after this update, but even when I set the memory limit to 2 GiB (that amount is allocated at the beginning), the system monitor showed that it only used 50 MiB, which is strange. After some testing:

ulli size = 2073741824;
int blocknum = size / VBLOCK_SIZE;
VRam *vr = new VRam(size);
int **arrs = new int*[blocknum];
for (int i = 0; i < blocknum; ++i) {
    //cout<<i<<endl;
    arrs[i] = (int*)vr->allocate(VBLOCK_SIZE);
    arrs[i][0] = i;     //<<<<<---------------
    for (int j = 0; j < VBLOCK_SIZE / 4; ++j) {   // assumes sizeof(int) == 4
        //cout<<" "<<j<<endl;
        arrs[i][j] = j+i;    //<<<<-------------
    }
}

I got the following results: if I write to everything in the allocated memory (the second <<<----), it really is allocated, and the program uses the specified amount of space. If I write nothing, or only the first few bytes of each block (the first <<<---), the memory is never actually used: I can request 16 GiB on my laptop with 8 GiB of RAM and it still works. This looks like some kind of memory optimization, but the whole reason I created this class was to make memory access faster for the CPU. Does this behavior cost any speed? If so, how can I get around it without writing to every byte of the memory? I am compiling on Ubuntu 16.04 LTS with qmake (Qt 5.5.1, GCC 5.2.1 20151129, 64-bit).
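
To observe this from inside the process rather than through the system monitor, the resident set size can be read straight from /proc. A minimal sketch (Linux-specific; it parses the VmRSS line of /proc/self/status, and the printRss helper name is made up for illustration):

#include <fstream>
#include <iostream>
#include <string>

// Print the process' resident set size, i.e. how much of the
// allocation is actually backed by physical memory right now.
void printRss()
{
    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line))
        if (line.compare(0, 6, "VmRSS:") == 0)
            std::cout << line << std::endl;   // e.g. "VmRSS:  51200 kB"
}

Calling this before and after the inner loop shows the same jump the system monitor does.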

Upvotes: 2

Views: 409

Answers (3)

You need to understand the difference between virtual memory and physical memory. Physical memory is the actual RAM in your machine; it's reasonably fast (though much slower than cache or register access). What you are allocating is virtual memory, and the ability to allocate more virtual memory than you have physical memory has been commonplace in pretty much every serious operating system for the last 30 or 40 years.

When you touch some virtual memory, it is brought into physical memory (either by copying it from the swap file or, if this is the first time that virtual memory has been touched, by simply zeroing the corresponding physical memory). When you run out of physical memory, virtual memory is copied out to the swap file.

If you need to copy virtual memory to and from the swap file, this is going to be very slow. You want to avoid doing this too much.
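
If you do want the physical backing up front without writing every byte, note that backing is granted a page at a time, so touching one byte per page is enough. A minimal sketch, assuming the block was freshly allocated (the prefault helper name is made up; the page size is queried rather than hard-coded):

#include <cstddef>
#include <unistd.h>

// Fault in every page of [p, p+len) by writing one byte per page.
// Intended for freshly allocated (zero-filled) memory, since it
// overwrites the first byte of each page with 0.
void prefault(void *p, std::size_t len)
{
    const long pageSize = sysconf(_SC_PAGESIZE);   // commonly 4096
    volatile unsigned char *c = static_cast<volatile unsigned char *>(p);
    for (std::size_t off = 0; off < len; off += static_cast<std::size_t>(pageSize))
        c[off] = 0;
}

mlock(2) can additionally pin the range so it cannot be swapped back out afterwards.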

Upvotes: 1

random passerby

Reputation: 51

You may be interested in the Linux kernel's overcommit-accounting documentation, which details how Linux handles memory overcommit.

TL;DR: Linux will let a process allocate large amounts of memory, but only actually consumes physical memory when the process writes to it.
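
The policy in force can be checked at runtime; a small sketch that reads it from /proc (0 = heuristic overcommit, the default; 1 = always overcommit; 2 = strict accounting, where allocations beyond the commit limit fail up front):

#include <fstream>
#include <iostream>

int main()
{
    std::ifstream f("/proc/sys/vm/overcommit_memory");
    int mode = -1;
    f >> mode;                 // stays -1 if the file could not be read
    std::cout << "vm.overcommit_memory = " << mode << std::endl;
}

The same value can be changed system-wide with sysctl vm.overcommit_memory.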

Upvotes: 1

edmz

Reputation: 8492

It's because of the nature of the Linux memory allocator: it makes your process think it got access to the requested amount of memory. However, the memory is only truly assigned when your process actually touches it (i.e., reads or writes it).

Quoting malloc(3):

By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available.
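
If the memory should be truly backed the moment it is handed out, one option (a sketch, not the asker's VRam implementation) is to allocate the big block with mmap and MAP_POPULATE, which prefaults the pages at allocation time:

#include <cstddef>
#include <sys/mman.h>

// Allocate len bytes of anonymous memory and ask the kernel to
// populate (prefault) the pages immediately instead of lazily.
void *allocatePopulated(std::size_t len)
{
    void *p = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
    return p == MAP_FAILED ? nullptr : p;
}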

Upvotes: 1
