Reputation: 23
I have a problem with SciPy's null_space. Take the following example:
import numpy as np
import scipy.linalg as la

A = np.array([[7, 3, 2], [3, 9, 4], [2, 4, 5]])
eigen = np.linalg.eig(A)
with output eigen =
(array([13.477,  5.   ,  2.523]),
 array([[ 0.486,  0.873, -0.041],
        [ 0.74 , -0.436, -0.511],
        [ 0.464, -0.218,  0.858]]))
I now have the eigenvalues and eigenvectors inside the eigen tuple. If e is an eigenvalue of A (for example 13.477), then the null space of A - e*I should obviously be nontrivial. However,
null = la.null_space(A-eigen[0][0]*np.eye(3))
returns
array([], shape=(3, 0), dtype=complex128)
which should have been the eigenvector corresponding to eigen[0][0]. (Note that when I run the same code for eigen[0][1] and eigen[0][2], it correctly returns the eigenvectors shown above.)
To check this, I asked for the eigenvalues and eigenvectors of (A - e*I):
null_eigen = np.linalg.eig(A-eigen[0][0]*np.eye(3))
with output null_eigen =
(array([-1.243e-14, -8.477e+00, -1.095e+01]),
 array([[ 0.486,  0.873, -0.041],
        [ 0.74 , -0.436, -0.511],
        [ 0.464, -0.218,  0.858]]))
Clearly the first eigenvalue, which corresponds to the eigenvector of 13.477, is "almost" zero, so why did scipy.linalg.null_space not pick it up?
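The singular values of the shifted matrix make the same point; here is a quick sanity check using numpy.linalg.svd (purely illustrative):
# Since A - e*I is symmetric, its singular values are the absolute values
# of its eigenvalues, so the smallest one should be "almost" zero.
s = np.linalg.svd(A - eigen[0][0]*np.eye(3), compute_uv=False)
print(s)        # smallest singular value is on the order of 1e-14
print(s.min())  # tiny, but not exactly 0.0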
Upvotes: 1
Views: 1366
Reputation: 36299
From the documentation of null_space:

rcond : Relative condition number. Singular values s smaller than rcond * max(s) are considered zero. Default: floating point eps * max(M, N).
and thus rcond determines the effective null space. Floating point math is not exact, so the smallest singular value of A - e*I is not exactly zero, and here it happens to slip above that threshold. Using a larger value for rcond will give the expected result:
import numpy as np
from scipy.linalg import null_space

A = np.array([[7, 3, 2],
              [3, 9, 4],
              [2, 4, 5]])
eigen = np.linalg.eig(A)

print(eigen[1][:, 0])                                      # eigenvector for the largest eigenvalue
print(null_space(A - eigen[0][0]*np.eye(3), rcond=1e-14))  # null space with a looser tolerance
with the output:
[0.48622704 0.74041411 0.46407996]
[[-0.48622704]
 [-0.74041411]
 [-0.46407996]]
The null space basis vector matches the eigenvector up to sign.
For more details you can also take a look at the source code.
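The thresholding step there boils down to something like the following sketch; it just applies the rule quoted above, and the variable names and printout are mine, not SciPy's:
import numpy as np
from scipy.linalg import svd

A = np.array([[7, 3, 2],
              [3, 9, 4],
              [2, 4, 5]])
e = np.linalg.eig(A)[0][0]        # largest eigenvalue, ~13.477
M = A - e*np.eye(3)

# Documented rule: singular values below rcond * max(s) count as zero,
# and the corresponding rows of vh span the null space.
u, s, vh = svd(M)
default_rcond = np.finfo(s.dtype).eps * max(M.shape)  # the documented default
for rcond in (default_rcond, 1e-14):
    tol = rcond * s.max()
    nullity = int(np.sum(s <= tol))
    print(f"rcond={rcond:.1e}  tol={tol:.1e}  smallest s={s.min():.1e}  nullity={nullity}")
# With the default rcond the smallest singular value (~1e-14) lands just above
# tol, so the nullity comes out as 0; with rcond=1e-14 it falls below tol and
# the nullity is 1, with basis vector vh[-1] (the eigenvector, up to sign).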
Upvotes: 2