covstat

Reputation: 331

Armadillo: efficient matrix allocation on the heap

I'm using Armadillo to manipulate large matrices in C++, read from a CSV file.

mat X;
X.load("myfile.csv",csv_ascii);
colvec x1 = X(span::all,0);
colvec x2 = X(span::all,1);
//etc.

So x1, ..., xk (for, say, k = 20) are the columns of X. X will typically have between 2000 and 16000 rows. My question is:

How can I allocate (and subsequently deallocate) X onto the heap (free store)?

This section of the Armadillo docs describes constructing a mat from auxiliary memory. Is this the same as heap allocation? It requires knowing the matrix dimensions in advance, which I won't until X has been read from the CSV:

mat(aux_mem*, n_rows, n_cols, copy_aux_mem = true, strict = true) 

Any suggestions would be greatly appreciated. (I'm using g++ 4.2.1; my current program runs fine locally on my MacBook Pro, but when I run it on my university's computing cluster (Linux, g++ 4.1.2), I get a segmentation fault. The program is too large to post.)

Edit: I ended up doing this:

arma::u32 Z_rows = 10000;
arma::u32 Z_cols = 20;

// allocate the backing buffer on the heap ...
double* aux_mem = new double[Z_rows * Z_cols];

// ... and tell Z to use it without copying it (copy_aux_mem = false)
// and without ever reallocating it (strict = true)
mat Z(aux_mem, Z_rows, Z_cols, false, true);
Z = randn(Z_rows, Z_cols);

which first allocates memory on the heap and then tells the matrix Z to use it.

Upvotes: 3

Views: 3134

Answers (1)

mtall

Reputation: 3620

A look at the source code shows that Armadillo already allocates large matrices on the heap.

To reduce the amount of memory required, you may want to use fmat instead of mat, at the cost of reduced precision: fmat stores float elements, while mat stores double. See http://arma.sourceforge.net/docs.html#Mat.

It's also possible that the system administrator of the Linux computing cluster has enabled resource limits (e.g. each user may only allocate up to a certain maximum amount of memory). For example, see http://www.linuxhowtos.org/Tips%20and%20Tricks/ulimit.htm.
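You can check from an interactive shell on the cluster whether such limits are in place. For example, with the bash built-in ulimit (output is either "unlimited" or a number, typically in KiB):

```shell
# show all per-process resource limits for the current shell
ulimit -a

# virtual memory limit specifically; "unlimited" means no cap
ulimit -v

# stack size limit -- a small stack can also cause segfaults
# with deep recursion or large stack-allocated objects
ulimit -s
```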

Upvotes: 2
