Reputation: 3954
I am working through the 5th ed of Linear Programming by Saul I. Gass.
He gives the following text and example: "Given an n x n nonsingular matrix A, then A can be expressed as the product A = LU... ...if we let the diagonal elements of U (or L) be all equal to 1, then the LU decomposition will be unique..."
I managed to get a working lower-upper decomposition with code I adapted from this SO question: Is there a built-in/easy LDU decomposition method in Numpy?
But I still can't tell what is going on and why L and U are so different from my textbook. Can anybody explain this to me?
So this code:
import numpy as np
import scipy.linalg as la
a = np.array([[1, 1, -1],
[-2, 1, 1],
[1, 1, 1]])
(P, L, U) = la.lu(a)
print(P)
print(L)
print(U)
D = np.diag(np.diag(U)) # D is just the diagonal of U
U /= np.diag(U)[:, None] # Normalize rows of U
print(P.dot(L.dot(D.dot(U)))) # Check
gives this output (P, then L, then U, then the reconstruction check):
[[ 0. 1. 0.]
[ 1. 0. 0.]
[ 0. 0. 1.]]
[[ 1. 0. 0. ]
[-0.5 1. 0. ]
[-0.5 1. 1. ]]
[[-2. 1. 1. ]
[ 0. 1.5 -0.5]
[ 0. 0. 2. ]]
[[ 1. 1. -1.]
[-2. 1. 1.]
[ 1. 1. 1.]]
Upvotes: 3
Views: 5580
Reputation: 23637
There is a choice of which matrix (L or U) gets ones on its diagonal. The textbook example chose U, but SciPy's implementation chose L. This explains the difference.
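You can also convert SciPy's unit-lower convention into the textbook's unit-upper convention without transposing, by moving the diagonal of U over into L. Here is a minimal sketch (the names `L_crout` and `U_crout` are just illustrative, not anything from SciPy):

```python
import numpy as np
import scipy.linalg as la

a = np.array([[1, 1, -1],
              [-2, 1, 1],
              [1, 1, 1]], dtype=float)

P, L, U = la.lu(a)

# Write A = P (L D) (D^-1 U) with D = diag(U):
# L D is still lower triangular, and D^-1 U has ones
# on its diagonal, i.e. the textbook convention.
d = np.diag(U)
L_crout = L * d           # scale column j of L by d[j]
U_crout = U / d[:, None]  # scale row i of U by 1/d[i]

# The product is unchanged, so this still reconstructs a.
print(P @ L_crout @ U_crout)
```

This works whenever all diagonal entries of U are nonzero, which is guaranteed here because the matrix is nonsingular.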
To illustrate the point, we can turn things around:
(P, L, U) = la.lu(a.T)
print(P.T)
# [[ 1. 0. 0.]
# [ 0. 1. 0.]
# [ 0. 0. 1.]]
print(L.T)
# [[ 1. 1. -1. ]
# [ 0. 1. -0.33333333]
# [ 0. 0. 1. ]]
print(U.T)
# [[ 1. 0. 0.]
# [-2. 3. 0.]
# [ 1. 0. 2.]]
By transposing the matrix, we have essentially swapped U and L, so that the other matrix gets the ones on the diagonal. And voilà, the result is the same as in the textbook.
(Note that if the permutation matrix P were not the identity matrix, the results would look a little different.)
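The identity behind this trick is that transposing reverses the product: a = (a.T).T = (P L U).T = U.T @ L.T @ P.T, where U.T is lower triangular and L.T is upper triangular with unit diagonal. A quick sketch to check this directly:

```python
import numpy as np
import scipy.linalg as la

a = np.array([[1, 1, -1],
              [-2, 1, 1],
              [1, 1, 1]], dtype=float)

# Factor the transpose: a.T = P @ L @ U.
P, L, U = la.lu(a.T)

# Transposing both sides gives a = U.T @ L.T @ P.T,
# a decomposition of a whose upper factor (L.T) has a unit diagonal.
print(U.T @ L.T @ P.T)
```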
Upvotes: 5