coldsky

Reputation: 95

How to extract matrixL() and matrixU() when using Eigen::CholmodSupernodalLLT?

I'm trying to use Eigen::CholmodSupernodalLLT for Cholesky decomposition; however, it seems that I cannot access matrixL() and matrixU(). How can I extract matrixL() and matrixU() from Eigen::CholmodSupernodalLLT for later use?

Upvotes: 7

Views: 957

Answers (3)

mkln

Reputation: 14973

A partial answer to complement what others have said.

Consider Y ~ MultivariateNormal(0, A). One may want to (1) evaluate the (log-)likelihood (a multivariate normal density), or (2) sample from that density.

For (1), it is necessary to solve Ax = b where A is symmetric positive-definite, and to compute its log-determinant. (2) requires an L such that A = L * L.transpose(), since Y ~ MultivariateNormal(0, A) can be generated as Y = L u where u ~ MultivariateNormal(0, I).

A Cholesky LLT or LDLT decomposition is useful because chol(A) serves both purposes: solving Ax = b is easy given the factorization, and the (log-)determinant can be derived from the (sum) product of the (log-)entries of D or of the diagonal of L. By construction, L can then be used for sampling.
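In formulas (a brief recap added here, not part of the original answer), writing A = L * L.transpose() for LLT and A = L * D * L.transpose() for LDLT (L unit lower triangular in the latter case):

\log\det A = 2\textstyle\sum_i \log L_{ii} \quad\text{(LLT)}, \qquad \log\det A = \textstyle\sum_i \log d_i \quad\text{(LDLT)},

Y = L\,u \quad\text{(LLT)} \quad\text{or}\quad Y = L\,D^{1/2}\,u \quad\text{(LDLT)}, \qquad u \sim \mathcal{N}(0, I) \;\Rightarrow\; Y \sim \mathcal{N}(0, A).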

So, in Eigen one can use:

  • Eigen::SimplicialLDLT solver(A) (or Eigen::SimplicialLLT): then solver.solve(b) solves the system, and the determinant can be computed from solver.vectorD(). Useful because if A is a covariance matrix, then solver can be used for likelihood evaluations and matrixL() for sampling (a sketch follows this list).

  • Eigen::CholmodDecomposition does not give access to matrixL() or vectorD(), but it exposes .logDeterminant(), which is enough for goal (1) but not for (2).

  • Eigen::PardisoLDLT does not give access to matrixL() or vectorD() and does not expose a way to get the determinant.
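As a sketch of the first bullet (the function names and the inputs b and u ~ N(0, I) are illustrative, not from the original post; A is assumed to be a sparse symmetric positive-definite covariance matrix), using SimplicialLDLT for the likelihood and SimplicialLLT for sampling:

#include <Eigen/Sparse>
#include <Eigen/SparseCholesky>

// Sketch only: goal (1), the Gaussian log-likelihood term, via SimplicialLDLT.
double gaussianLogLik(const Eigen::SparseMatrix<double>& A,
                      const Eigen::VectorXd& b)
{
  Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> solver(A);
  Eigen::VectorXd x = solver.solve(b);                    // solves A x = b
  double logdet = solver.vectorD().array().log().sum();   // log|A| from the diagonal D
  return -0.5 * (logdet + b.dot(x));                      // constant term omitted
}

// Sketch only: goal (2), drawing Y ~ MultivariateNormal(0, A), via SimplicialLLT.
Eigen::VectorXd sampleZeroMeanGaussian(const Eigen::SparseMatrix<double>& A,
                                       const Eigen::VectorXd& u)   // u ~ N(0, I)
{
  Eigen::SimplicialLLT<Eigen::SparseMatrix<double>> solver(A);
  // The factorization is P A P^(-1) = L L^T, so a draw with covariance A
  // is P^(-1) * (L * u).
  Eigen::SparseMatrix<double> L = solver.matrixL();       // lower-triangular factor
  Eigen::VectorXd w = L * u;
  return solver.permutationPinv() * w;
}

In a real application the factorization would of course be computed once and the solver object reused for both purposes.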

In some applications, step (2) - sampling - can be done at a later stage, so Eigen::CholmodDecomposition is enough. At least in my configuration, Eigen::CholmodDecomposition runs 2 to 5 times faster than Eigen::SimplicialLDLT (I guess because of the permutations done under the hood to facilitate parallelization).
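For goal (1) alone, a hedged sketch with CholmodDecomposition might look as follows (assuming Eigen's CholmodSupport module and a linked CHOLMOD/SuiteSparse installation; the function and variable names are illustrative):

#include <Eigen/Sparse>
#include <Eigen/CholmodSupport>

// Sketch only: likelihood evaluation through CHOLMOD's supernodal factorization.
double gaussianLogLikCholmod(const Eigen::SparseMatrix<double>& A,
                             const Eigen::VectorXd& b)
{
  Eigen::CholmodDecomposition<Eigen::SparseMatrix<double>> solver;
  solver.setMode(Eigen::CholmodSupernodalLLt);          // the factorization the OP asks about
  solver.compute(A);
  Eigen::VectorXd x = solver.solve(b);                  // solves A x = b
  return -0.5 * (solver.logDeterminant() + b.dot(x));   // constant term omitted
}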

Example: in Bayesian spatial Gaussian process regression, the spatial random effects can be integrated out and do not need to be sampled, so MCMC can proceed swiftly with Eigen::CholmodDecomposition to achieve convergence for the unknown parameters. The spatial random effects can then be recovered in parallel using Eigen::SimplicialLDLT. Typically this is only a small part of the computations, but having matrixL() directly from CholmodDecomposition would simplify it a bit.

Upvotes: 1

As reported elsewhere, this cannot be done easily. I am copying a possible recommendation (answered by Gael Guennebaud himself), even if it is somewhat old:

If you really need access to the factor to do your own cooking, then better use the built-in SimplicialL{D}LT<> class. Extracting the factors from the supernodal internal representations of Cholmod/Pardiso is indeed not straightforward and very rarely needed. We have to check, but if Cholmod/Pardiso provide routines to manipulate the factors, like applying it to a vector, then we could let matrix{L,U}() return a pseudo expression wrapping these routines.

Developing code for extracting this is likely beyond SO, and probably a topic for a feature request.
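For reference, CHOLMOD itself does provide routines that apply the factor to a vector without ever forming it explicitly, which is the kind of wrapping the quote alludes to. A rough sketch against the raw CHOLMOD C API (illustrative only; Asp and rhs are assumed to be cholmod_sparse / cholmod_dense objects built elsewhere):

#include <cholmod.h>

// Sketch only: solve with the L factor of a (supernodal) factorization
// instead of extracting it.
void applyLFactorSolve(cholmod_sparse* Asp, cholmod_dense* rhs)
{
  cholmod_common c;
  cholmod_start(&c);

  cholmod_factor* L = cholmod_analyze(Asp, &c);   // symbolic analysis
  cholmod_factorize(Asp, L, &c);                  // numeric factorization

  // cholmod_solve can target sub-systems of P A P^T = L L^T:
  cholmod_dense* Prhs = cholmod_solve(CHOLMOD_P, L, rhs, &c);   // apply the permutation P
  cholmod_dense* y    = cholmod_solve(CHOLMOD_L, L, Prhs, &c);  // solve L y = P rhs

  cholmod_free_dense(&Prhs, &c);
  cholmod_free_dense(&y, &c);
  cholmod_free_factor(&L, &c);
  cholmod_finish(&c);
}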

Of course, the solution with (dense) LLT is at hand (but that is not the topic of the OP's question).

Upvotes: 0

bartop

Reputation: 10315

You cannot do this using the given class. The class you are referencing is an equation solver (which indeed uses a Cholesky decomposition internally). To decompose your matrix you should rather use Eigen::LLT. Code example from the Eigen website:

#include <Eigen/Dense>
using namespace Eigen;
int main() {
  MatrixXd A(3,3);
  A << 4,-1,2, -1,6,0, 2,0,5;        // a symmetric positive-definite matrix
  LLT<MatrixXd> lltOfA(A);           // compute the Cholesky factorization A = L * L^T
  MatrixXd L = lltOfA.matrixL();     // lower-triangular factor L
  MatrixXd U = lltOfA.matrixU();     // upper-triangular factor U = L^T
}
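Since the matrix in the question is sparse, the analogous sparse route (a sketch, not from the original answer) would be Eigen::SimplicialLLT; note that the extracted factor is that of the permuted matrix P A P^(-1), not of A itself:

#include <Eigen/Sparse>
#include <Eigen/SparseCholesky>

// Sketch only: extract the sparse Cholesky factor of a sparse SPD matrix A.
Eigen::SparseMatrix<double> sparseCholeskyFactor(const Eigen::SparseMatrix<double>& A)
{
  Eigen::SimplicialLLT<Eigen::SparseMatrix<double>> llt(A);
  Eigen::SparseMatrix<double> L = llt.matrixL();   // factor of P A P^(-1)
  return L;                                        // llt.permutationP() gives P
}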

Upvotes: 0
