Reputation: 2085
Right now I have the following function to evaluate the Gaussian density:
double densities::evalMultivNorm(const Eigen::VectorXd &x, const Eigen::VectorXd &meanVec, const Eigen::MatrixXd &covMat)
{
    double inv_sqrt_2pi = 0.3989422804014327;
    double quadform  = (x - meanVec).transpose() * covMat.inverse() * (x - meanVec);
    double normConst = pow(inv_sqrt_2pi, covMat.rows()) * pow(covMat.determinant(), -0.5);
    return normConst * exp(-0.5 * quadform);
}
This is just a transcription of the formula. However, I get a lot of 0s, NaNs and infs. I suspect the problem is the covMat.determinant() portion being very close to zero.
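To illustrate the kind of breakdown I mean, here is a minimal sketch (the 200-dimensional 0.01*I covariance is just a made-up example) showing how the two pow() factors blow up even though the density itself is perfectly representable:

#include <Eigen/Dense>
#include <cmath>
#include <iostream>

int main()
{
    // Made-up example: 200-dimensional covariance 0.01 * I.
    const int d = 200;
    Eigen::MatrixXd covMat = 0.01 * Eigen::MatrixXd::Identity(d, d);

    // det(covMat) = 0.01^200 = 1e-400, which underflows to 0 in double precision,
    // so pow(det, -0.5) becomes inf.
    std::cout << covMat.determinant() << "\n";                 // prints 0
    std::cout << std::pow(covMat.determinant(), -0.5) << "\n"; // prints inf

    // The other normalisation factor is tiny on its own as well:
    std::cout << std::pow(0.3989422804014327, d) << "\n";      // ~1e-80, underflows to 0 for larger d
    return 0;
}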
I have heard that it is more "stable" to premultiply x - meanVec
by the inverse of a "square root" of the covariance matrix. Statistically, this gives you a random vector with mean zero and the identity matrix as its covariance matrix. My questions are:
Upvotes: 1
Views: 3227
Reputation: 18827
Ad 1: "Depends". E.g., if your covariance matrix has a special structure that makes it easy to calculate its inverse, or if the dimension is very small, it can be faster and more stable to actually calculate the inverse.
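For instance, a minimal sketch assuming a purely diagonal covariance (the evalDiagNorm helper is only illustrative, not part of Eigen): the inverse is the elementwise reciprocal and the determinant is the product of the variances, so no decomposition is needed at all:

#include <Eigen/Dense>
#include <cmath>

// Illustrative helper: density of N(meanVec, diag(vars)) evaluated at x.
double evalDiagNorm(const Eigen::VectorXd &x, const Eigen::VectorXd &meanVec, const Eigen::VectorXd &vars)
{
    const double log2Pi = std::log(2.0 * M_PI);
    const Eigen::VectorXd diff = x - meanVec;
    // quadratic form: sum_i diff_i^2 / vars_i
    const double quadform = (diff.array().square() / vars.array()).sum();
    // log det(covMat) = sum_i log(vars_i)
    const double logDet   = vars.array().log().sum();
    return std::exp(-0.5 * (x.rows() * log2Pi + logDet + quadform));
}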
Ad 2: Usually, Cholesky decomposition does the job. If your covariance is truly positive definite (i.e., not too close to a semidefinite matrix), decompose covMat = L*L^T and compute squaredNorm(L\(x-mu)) (where x = A\b means "solve A*x = b for x"). Of course, if your covariance is fixed, you should compute L only once (and perhaps invert it as well). You should also use L to compute sqrt(covMat.determinant()), since computing the determinant would otherwise require decomposing covMat again.
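If your covariance is fixed, the compute-L-once advice could look roughly like this (the MultivNorm wrapper is only a sketch, not a drop-in implementation): factor once in the constructor, then reuse L and its log-determinant for every evaluation:

#include <Eigen/Dense>
#include <cmath>
#include <stdexcept>

// Illustrative wrapper: factor the fixed covariance once, evaluate many times.
class MultivNorm
{
public:
    MultivNorm(const Eigen::VectorXd &meanVec, const Eigen::MatrixXd &covMat)
        : mean_(meanVec), chol_(covMat)
    {
        if (chol_.info() != Eigen::Success)
            throw std::runtime_error("Cholesky decomposition failed");
        // log|L| = sum of log-diagonal entries of L; det(covMat) = det(L)^2.
        logDetL_ = chol_.matrixLLT().diagonal().array().log().sum();
    }

    double operator()(const Eigen::VectorXd &x) const
    {
        const double logSqrt2Pi = 0.5 * std::log(2.0 * M_PI);
        // Triangular solve instead of forming covMat.inverse().
        const double quadform = chol_.matrixL().solve(x - mean_).squaredNorm();
        return std::exp(-x.rows() * logSqrt2Pi - 0.5 * quadform - logDetL_);
    }

private:
    Eigen::VectorXd mean_;
    Eigen::LLT<Eigen::MatrixXd> chol_;
    double logDetL_;
};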
Minor improvement: instead of pow(inv_sqrt_2pi, covMat.rows()), compute logSqrt2Pi = log(sqrt(2*pi)) and then return exp(-0.5*quadform - covMat.rows()*logSqrt2Pi) / L.determinant().
Ad 3: This should run in Eigen 3.2 or later:
#include <Eigen/Dense>
#include <cmath>

double foo(const Eigen::VectorXd &x, const Eigen::VectorXd &meanVec, const Eigen::MatrixXd &covMat)
{
    // Avoid magic numbers in your code. Compilers will be able to compute this at compile time:
    const double logSqrt2Pi = 0.5*std::log(2*M_PI);
    typedef Eigen::LLT<Eigen::MatrixXd> Chol;
    Chol chol(covMat);
    // Handle non positive definite covariance somehow:
    if(chol.info()!=Eigen::Success) throw "decomposition failed!";
    const Chol::Traits::MatrixL& L = chol.matrixL();
    double quadform = (L.solve(x - meanVec)).squaredNorm();
    // L.determinant() is the product of the diagonal entries of L, i.e. sqrt(det(covMat)).
    return std::exp(-x.rows()*logSqrt2Pi - 0.5*quadform) / L.determinant();
}
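A quick usage sketch (the values are made up) to sanity-check it: in one dimension with zero mean and unit variance, the density at zero should be 1/sqrt(2*pi) ≈ 0.398942:

#include <iostream>

int main()
{
    Eigen::VectorXd x(1), mu(1);
    Eigen::MatrixXd sigma(1, 1);
    x << 0.0;
    mu << 0.0;
    sigma << 1.0;
    // Should print roughly 0.398942.
    std::cout << foo(x, mu, sigma) << "\n";
    return 0;
}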
Upvotes: 3