Reputation: 269
I am trying to get the LU decomposition of a matrix. Since it is suggested that scipy.linalg.lu might not be very accurate, I decided to make sure the maximum absolute error is well below the smallest element of the matrix I am working with.
Here is a code snippet where I LU-decompose the transpose of A:
import numpy as np
import scipy.linalg

# smallest nonzero entry of A by absolute value
print min([min(r[np.nonzero(r)], key=abs) for r in A], key=abs), "min"
P, L, U = scipy.linalg.lu(np.transpose(A))
B = np.matmul(np.matmul(P, L), U)              # reconstruct A^T from the factors
diff = np.absolute(B - np.transpose(A))
self.error.append([C, max([max(r) for r in diff])])
print max([max(r) for r in diff]), "max"       # largest absolute reconstruction error
The output is as follows:
1.8 min
3.552713678800501e-15 max
which is perfect for me. However, there is a problem.
$$ PA^T=LU $$
$$ A^T= P^{-1}LU $$
Yet, in the line where I compute B, I am not multiplying by $P^{-1}$; I multiply by $P$. For some reason that works perfectly, while it gives me huge errors when I replace $P$ with $P^{-1}$.
Is there something I don't know about np.matmul or scipy.linalg.lu? Does scipy.linalg.lu return $P$ or $P^{-1}$? Or is there a mistake in my code?
Apologies if this post is not up to Stack Overflow standards.
Upvotes: 0
Views: 431
Reputation: 281748
From the docs:
The decomposition is:
A = P L U
It is entirely expected that multiplying the P, L, and U matrices produces something close to the array originally passed to scipy.linalg.lu. You are not supposed to invert P: SciPy's P is already the permutation that appears on the right-hand side ($A = PLU$), i.e. it is the inverse (and transpose) of the permutation in the textbook convention $PA = LU$.
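A quick self-contained check (a minimal sketch using a random test matrix, not your A) confirms this convention, and also that P is a permutation matrix, so its inverse is simply its transpose:

import numpy as np
import scipy.linalg

A = np.random.rand(4, 4)                   # any square matrix works as a test case
P, L, U = scipy.linalg.lu(A)               # SciPy's convention: A = P @ L @ U

print(np.allclose(np.matmul(np.matmul(P, L), U), A))    # True: no inverse of P needed
print(np.allclose(np.matmul(P.T, A), np.matmul(L, U)))  # True: P.T (= P^-1) recovers the textbook P A = L U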
Upvotes: 2