
Reputation: 9110

Memory occupation increase

I am trapped in a weird situation: my C++ code keeps consuming more memory (reaching around 70 GB) until the whole process gets killed.

I am invoking C++ code from Python; it implements the longest-common-subsequence (LCS) length algorithm.

The C++ code is shown below:

#include <stdio.h>

#define MAX(a,b) (((a)>(b))?(a):(b))

int LCSLength(long unsigned X[], long unsigned Y[], int m, int n)
{

  int** L = new int*[m+1];
  for(int i = 0; i < m+1; ++i)
    L[i] = new int[n+1];

  printf("i am hre\n");

  int i, j;
  for(i=0; i<=m; i++)
  {
    printf("i am hre1\n");
    for(j=0; j<=n; j++)
    {
        if(i==0 || j==0)
            L[i][j] = 0;
        else if(X[i-1]==Y[j-1])
            L[i][j] = L[i-1][j-1]+1;
        else
            L[i][j] = MAX(L[i-1][j],L[i][j-1]);
    }
  }
  int tt = L[m][n];

  printf("i am hre2\n");

  for (i = 0; i < m+1; i++)
    delete [] L[i];

  delete [] L;

  return tt;
}

And my Python code is like this:

from ctypes import cdll
import ctypes
lib = cdll.LoadLibrary('./liblcs.so')

la = 36840
lb = 833841
a = (ctypes.c_ulong * la)()
b = (ctypes.c_ulong * lb)()

for i in range(la):
    a[i] = 1
for i in range(lb):
    b[i] = 1

print "test"
lib._Z9LCSLengthPmS_ii(a, b, la, lb)

IMHO, in the C++ code, after the new operations, which allocate a large amount of memory on the heap, there should be no additional memory consumption inside the loop.

However, to my surprise, I observed that the memory usage keeps increasing during the loop. (I am watching it with top on Linux, and the program keeps printing i am hre1 until the process gets killed.)

This really confuses me: after the memory allocation, there are only arithmetic operations inside the loop, so why does the code take more memory?

Am I clear enough? Could anyone give me some help on this issue? Thank you!

Upvotes: 1

Views: 99

Answers (2)

Harry

Reputation: 11648

You're consuming too much memory. The reason the system does not kill the process at allocation time is that Linux allows you to allocate more memory than it can actually back (overcommit):

http://serverfault.com/questions/141988/avoid-linux-out-of-memory-application-teardown

I just did the same thing on a test machine. I was able to get past the new calls and start the loop; only when the system decided that I was eating too much of the available RAM did it kill the process.

This is what I got. A lovely OOM message in dmesg.

[287602.898843] Out of memory: Kill process 7476 (a.out) score 792 or sacrifice child
[287602.899900] Killed process 7476 (a.out) total-vm:2885212kB, anon-rss:907032kB, file-rss:0kB, shmem-rss:0kB

On Linux you would see something like this in your kernel logs or in the output of dmesg:

[287585.306678] Out of memory: Kill process 7469 (a.out) score 787 or sacrifice child
[287585.307759] Killed process 7469 (a.out) total-vm:2885208kB, anon-rss:906912kB, file-rss:4kB, shmem-rss:0kB
[287602.754624] a.out invoked oom-killer: gfp_mask=0x24201ca, order=0, oom_score_adj=0
[287602.755843] a.out cpuset=/ mems_allowed=0
[287602.756482] CPU: 0 PID: 7476 Comm: a.out Not tainted 4.5.0-x86_64-linode65 #2
[287602.757592] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.2-0-g33fbe13 by qemu-project.org 04/01/2014
[287602.759461]  0000000000000000 ffff88003d845780 ffffffff815abd27 0000000000000000
[287602.760689]  0000000000000282 ffff88003a377c58 ffffffff811d0e82 ffff8800397f8270
[287602.761915]  0000000000f7d192 000105902804d798 ffffffff81046a71 ffff88003d845780
[287602.763192] Call Trace:
[287602.763532]  [<ffffffff815abd27>] ? dump_stack+0x63/0x84
[287602.774614]  [<ffffffff811d0e82>] ? dump_header+0x59/0x1ed
[287602.775454]  [<ffffffff81046a71>] ? kvm_clock_read+0x1b/0x1d
[287602.776322]  [<ffffffff8112b046>] ? ktime_get+0x49/0x91
[287602.777127]  [<ffffffff81156c83>] ? delayacct_end+0x3b/0x60
[287602.777970]  [<ffffffff81187c11>] ? oom_kill_process+0xc0/0x367
[287602.778866]  [<ffffffff811882c5>] ? out_of_memory+0x3bf/0x406
[287602.779755]  [<ffffffff8118c646>] ? __alloc_pages_nodemask+0x8fc/0xa6b
[287602.780756]  [<ffffffff811c095d>] ? alloc_pages_current+0xbc/0xe0
[287602.781686]  [<ffffffff81186c1d>] ? filemap_fault+0x2d3/0x48b
[287602.782561]  [<ffffffff8128adea>] ? ext4_filemap_fault+0x37/0x51
[287602.783511]  [<ffffffff811a9d56>] ? __do_fault+0x68/0xb1
[287602.784310]  [<ffffffff811adcaa>] ? handle_mm_fault+0x6a4/0xd1b
[287602.785216]  [<ffffffff810496cd>] ? __do_page_fault+0x33d/0x398
[287602.786124]  [<ffffffff819c6ab8>] ? async_page_fault+0x28/0x30

Upvotes: 2

Trevor Hickey

Reputation: 37834

Take a look at what you are doing:

#include <iostream>

int main(){

  int m = 36840;
  int n = 833841;
  unsigned long total = 0;

  // the array of row pointers: (m+1) pointers
  total += (sizeof(int*) * (m+1));

  // each of the (m+1) rows holds (n+1) ints
  for(int i = 0; i < m+1; ++i){
    total += (sizeof(int) * (n+1));
  }

  std::cout << total << '\n';

}

You're simply consuming too much memory.
If the size of your int is 4 bytes, the (m+1) x (n+1) table alone is roughly 122 GB.

Upvotes: 1
