alex

Reputation: 84

std::bad_alloc at transpose of Eigen::SparseMatrix

I'm trying to calculate the following:
A = X^t * X

I'm using Eigen::SparseMatrix and get a std::bad_alloc error on the transpose() operation:

Eigen::SparseMatrix<double> trans = sp.transpose();

sp is also an Eigen::SparseMatrix, but it is very big. On one of the smaller datasets, the commands

std::cout << "Rows: " << sp.rows() << std::endl;
std::cout << "Cols: " << sp.cols() << std::endl;

give the following result:

Rows: 2061565968
Cols: 600

(I precompute the sizes of this matrix before I start to fill it)

Is there a limit on how many entries such a matrix can hold? I'm using a 64-bit Linux system with g++.

Thanks in advance

Alex

Upvotes: 3

Views: 1029

Answers (3)

ggael

Reputation: 29205

By default, Eigen::SparseMatrix uses int to store sizes and indices (for compactness). With that huge number of rows, however, you need 64-bit integers for both sp and sp.transpose():

typedef SparseMatrix<double, 0, std::ptrdiff_t> SpMat;

Note that you can directly write:

SpMat sp, sp2;
sp2 = sp.transpose() * sp;

even though sp.transpose() will have to be evaluated into a temporary anyway.

Upvotes: 2

alex

Reputation: 84

The answer from ggael worked with a slight modification:

In the definition of the SparseMatrix one cannot omit the options, so the correct typedef is

typedef SparseMatrix<double, 0, std::ptrdiff_t> SpMat;

The 0 can also be exchanged for 1: 0 means ColMajor (column-major) and 1 means RowMajor.

Thank you for your help.

Upvotes: 2

luk32

Reputation: 16070

I think it is impossible to answer your question in its current state.

There are two things: the size of the matrix as a mathematical object, and the size understood as the memory it occupies. For dense matrices these are pretty much the same (one depends linearly on the other). In the sparse case, however, memory occupation is not tied to the dimensions of the matrix but to the number of non-zero elements.

So, technically, the size constraints are pretty much unlimited (bounded only by the index type). However, you are of course still bound by memory when it comes to the number of (non-zero) elements.

You obviously make a copy of the matrix. So you could try calculating the size of the data the matrix object needs to hold, and see whether it fits within your memory.

This is not entirely trivial, but the docs say the storage is a list of non-zero elements. So a good estimate would probably be (2*sizeof(Index) + sizeof(Scalar)) * sp.nonZeros(), i.e. (row, col, value) per entry.

You could also monitor RAM usage before calling the transpose, and see if it stays within the limit if you double it.

Note: the transposition is probably not the culprit here, but operator= is. Maybe you can avoid making the copy.

Upvotes: -1
