Reputation: 487
I have a symmetric positive-definite matrix (e.g. a covariance matrix) and I want to calculate its inverse. Mathematically, I know that it is more efficient to invert the matrix via a Cholesky decomposition, especially if the matrix is big. But I was not sure how numpy.linalg.inv() works. Say I have the following code:
import numpy as np

X = np.arange(10000, dtype=float).reshape(100, 100)  # float avoids possible integer overflow in the product below
X = X + X.T - np.diag(X.diagonal())  # make X symmetric
X = np.dot(X, X.T)                   # make X positive-definite

# simple inversion:
inverse1 = np.linalg.inv(X)

# Cholesky decomposition inversion:
# X = L @ L.T, so inv(X) = inv(L).T @ inv(L)
c = np.linalg.inv(np.linalg.cholesky(X))
inverse2 = np.dot(c.T, c)
Which one is more efficient (inverse1 or inverse2)? If the second one is more efficient, why doesn't numpy.linalg.inv() use it internally?
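For reference, the explicit inv(np.linalg.cholesky(X)) step can also be avoided entirely, since SciPy exposes the Cholesky-based solver directly. A minimal sketch, assuming SciPy is available:

import numpy as np
from scipy.linalg import cho_factor, cho_solve

# factor X = L @ L.T once, then solve L @ L.T @ Y = I, so Y = inv(X)
factor = cho_factor(X, lower=True)
inverse3 = cho_solve(factor, np.eye(X.shape[0]))

This replaces the explicit triangular inversion and the final matrix product with triangular solves against the identity.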
Upvotes: 6
Views: 9478
Reputation: 97571
With the following setup:
import numpy as np
N = 100
X = np.linspace(0, 1, N*N).reshape(N, N)
X = 0.5*(X + X.T) + np.eye(N) * N  # symmetric and strictly diagonally dominant, hence positive-definite
I get the following timings with IPython's %timeit:
In [28]: %timeit np.linalg.inv(X)
255 µs ± 30.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [29]: %timeit c = np.linalg.inv(np.linalg.cholesky(X)); np.dot(c.T,c)
414 µs ± 15.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
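Part of the gap is that np.linalg.inv(np.linalg.cholesky(X)) treats the triangular factor as a general matrix, so its structure is wasted. A sketch that keeps the triangularity via scipy.linalg.solve_triangular (worth timing the same way; whether it beats plain inv at this size depends on your BLAS/LAPACK build):

import numpy as np
from scipy.linalg import solve_triangular

def chol_inverse(X):
    # X = L @ L.T, hence inv(X) = inv(L).T @ inv(L)
    L = np.linalg.cholesky(X)
    # invert L with a triangular solve against the identity
    Linv = solve_triangular(L, np.eye(L.shape[0]), lower=True)
    return Linv.T @ Linv

And if you only ever apply the inverse to vectors, skip forming it altogether and solve the linear systems directly (e.g. with scipy.linalg.cho_solve or np.linalg.solve).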
Upvotes: 2