I'm using scipy.sparse.linalg.eigsh() to solve a generalized eigenvalue problem. I want to use eigsh() because I'm working with large sparse matrices. The problem is that I cannot get the right answer: the eigenvalues and eigenvectors output by eigsh() are totally different from what I get from MATLAB's eigs().
Here is the data:
a:
304.7179 103.1667 36.9583 61.3478 11.5724
35.5242 111.4789 -9.8928 8.2586 -4.7405
10.8358 4.3433 145.6586 26.5153 13.1871
-1.1924 -2.5430 0.4322 43.1886 -0.6098
-18.7751 -8.8031 -4.3962 -5.8791 17.6588
b:
736.9822 615.7946 587.6828 595.7169 545.1878
615.7946 678.2142 575.7579 587.3469 524.7201
587.6828 575.7579 698.6223 593.5402 534.3675
595.7169 587.3469 593.5402 646.0410 530.1114
545.1878 524.7201 534.3675 530.1114 590.1373
In Python (a and b are numpy.ndarray):
In [11]: import scipy.sparse.linalg as lg
In [14]: x,y=lg.eigsh(a,M=b,k=2,which='SM')
In [15]: x
Out[15]: array([ 0.01456738, 0.22578463])
In [16]: y
Out[16]:
array([[ 0.00052614, 0.00807034],
[ 0.00514091, -0.01593113],
[ 0.00233622, -0.00429671],
[ 0.01877451, -0.06259276],
[ 0.01491696, 0.08002341]])
In [18]: a.dot(y[:,0])-x[0]*b.dot(y[:,0])
Out[18]: array([ 1.74827445, 0.30325634, 0.71299604, 0.42842245, -0.24724681])
In [19]: a.dot(y[:,1])-x[1]*b.dot(y[:,1])
Out[19]: array([-2.2463206 , -1.64704567, -0.80086734, -1.56796329, 0.03027861])
As the large residuals show, these eigenvalues and eigenvectors do not come close to satisfying the generalized eigenvalue equation a*v = λ*b*v.
However, in MATLAB it works well:
[y,x] = eigs(a,b,2,'sm');
y =
0.0037 -0.0141
-0.0056 0.0151
0.0015 0.0079
-0.0117 0.0666
-0.0298 -0.0753
x =
0.0202 0
0 0.3499
a*y(:,1)-x(1,1)*b*y(:,1)
ans =
1.0e-14 *
-0.3775
0.0777
0.0777
0.0555
0.0666
Also, b is positive definite:
In [24]: np.linalg.eigvals(b)
Out[24]:
array([ 2951.07297125, 137.81545217, 90.40223937, 107.04818229,
63.65818086])
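A stronger check than inspecting eigenvalues (a minimal sketch using the b above): for a symmetric matrix, positive definiteness is equivalent to having a Cholesky factorization, and np.linalg.cholesky raises LinAlgError when the matrix is not positive definite.

```python
import numpy as np

# the symmetric 5x5 matrix b from above
b = np.array([
    [736.9822, 615.7946, 587.6828, 595.7169, 545.1878],
    [615.7946, 678.2142, 575.7579, 587.3469, 524.7201],
    [587.6828, 575.7579, 698.6223, 593.5402, 534.3675],
    [595.7169, 587.3469, 593.5402, 646.0410, 530.1114],
    [545.1878, 524.7201, 534.3675, 530.1114, 590.1373],
])

# succeeds (no LinAlgError), so b is symmetric positive definite
L = np.linalg.cholesky(b)
print(np.allclose(L @ L.T, b))  # True
```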
Could anybody explain why I cannot get the right answer in Python?
Using lg.eigs() I do get the same outputs as in MATLAB. But a problem occurs when the matrix becomes large. For example, in MATLAB I get:
>> [x,y] = eigs(A,B,4,'sm');
y =
0.0001 0 0 0
0 0.0543 0 0
0 0 0.1177 0
0 0 0 0.1350
while in Python (3.5.2, SciPy 1.0.0), lg.eigs(A,M=B,k=4,which='SM') gives eigenvalues like:
array([ 4.43277284e+51 +0.00000000e+00j,
1.04797857e+48 +8.30096152e+47j,
1.04797857e+48 -8.30096152e+47j, -1.45582240e+31 +0.00000000e+00j])
Upvotes: 2
Views: 4181
As Paul Panzer said, the "h" in eigsh stands for Hermitian, which your matrix a is not. (Also, having positive eigenvalues does not imply being positive definite; that is only true if the matrix is Hermitian to begin with.) The method eigsh does not check whether its input is Hermitian; it just runs a process that assumes it is, so the output is incorrect when that assumption fails.
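This is easy to verify on the 5×5 a from the question (a minimal sketch):

```python
import numpy as np

# the 5x5 matrix a from the question
a = np.array([
    [304.7179, 103.1667,  36.9583, 61.3478, 11.5724],
    [ 35.5242, 111.4789,  -9.8928,  8.2586, -4.7405],
    [ 10.8358,   4.3433, 145.6586, 26.5153, 13.1871],
    [ -1.1924,  -2.5430,   0.4322, 43.1886, -0.6098],
    [-18.7751,  -8.8031,  -4.3962, -5.8791, 17.6588],
])

# eigsh assumes a Hermitian (here: real symmetric) input;
# a fails this check, so eigsh silently returns garbage for it
print(np.allclose(a, a.T))  # False
```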
Using the eigs method instead produces the same results as MATLAB:
x, y = lg.eigs(a,M=b,k=2,which='SM')
np.real(x), np.real(y) # x and y have tiny imaginary parts due to float math errors
(array([ 0.02022333, 0.34993346]),
array([[-0.00368007, -0.0140898 ],
[ 0.0056435 , 0.01509067],
[-0.00154725, 0.00790518],
[ 0.01170563, 0.06664118],
[ 0.02981777, -0.07528778]]))
Of course, eigs takes a lot longer to run than eigsh.
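One can confirm that the eigs eigenpairs are correct by checking the residual of the generalized eigenvalue equation a·v = λ·b·v (a sketch using the 5×5 a and b from the question):

```python
import numpy as np
import scipy.sparse.linalg as lg

a = np.array([
    [304.7179, 103.1667,  36.9583, 61.3478, 11.5724],
    [ 35.5242, 111.4789,  -9.8928,  8.2586, -4.7405],
    [ 10.8358,   4.3433, 145.6586, 26.5153, 13.1871],
    [ -1.1924,  -2.5430,   0.4322, 43.1886, -0.6098],
    [-18.7751,  -8.8031,  -4.3962, -5.8791, 17.6588],
])
b = np.array([
    [736.9822, 615.7946, 587.6828, 595.7169, 545.1878],
    [615.7946, 678.2142, 575.7579, 587.3469, 524.7201],
    [587.6828, 575.7579, 698.6223, 593.5402, 534.3675],
    [595.7169, 587.3469, 593.5402, 646.0410, 530.1114],
    [545.1878, 524.7201, 534.3675, 530.1114, 590.1373],
])

x, y = lg.eigs(a, M=b, k=2, which='SM')
# residual a@v - lambda*(b@v) should be near zero for a true eigenpair
resid = a @ y[:, 0] - x[0] * (b @ y[:, 0])
print(np.max(np.abs(resid)))  # tiny, close to machine precision
```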
Your second example is a 34-by-34 dense matrix; it has no zeros at all. Using sparse linear algebra on it is not reasonable, and there is a warning saying that the method did not converge. The regular dense linear algebra module works fine:
import numpy as np
import scipy.linalg as la
sorted_eigenvals = np.sort(np.real(la.eigvals(Am, Bm)))
This returns
5.90947734e-05, 5.42521180e-02, 1.17669899e-01, 1.34952286e-01, ...
in agreement with the MATLAB output you quoted (except that MATLAB rounds the numbers):
0.0001, 0.0543, 0.1177, 0.1350
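The Am/Bm from the 34×34 case are not reproduced in the question, but the same dense route can be checked against the 5×5 example (a sketch; eigenvalues sorted by magnitude to mirror which='SM'):

```python
import numpy as np
import scipy.linalg as la

a = np.array([
    [304.7179, 103.1667,  36.9583, 61.3478, 11.5724],
    [ 35.5242, 111.4789,  -9.8928,  8.2586, -4.7405],
    [ 10.8358,   4.3433, 145.6586, 26.5153, 13.1871],
    [ -1.1924,  -2.5430,   0.4322, 43.1886, -0.6098],
    [-18.7751,  -8.8031,  -4.3962, -5.8791, 17.6588],
])
b = np.array([
    [736.9822, 615.7946, 587.6828, 595.7169, 545.1878],
    [615.7946, 678.2142, 575.7579, 587.3469, 524.7201],
    [587.6828, 575.7579, 698.6223, 593.5402, 534.3675],
    [595.7169, 587.3469, 593.5402, 646.0410, 530.1114],
    [545.1878, 524.7201, 534.3675, 530.1114, 590.1373],
])

w = la.eig(a, b)[0]                           # all generalized eigenvalues
vals = np.abs(w[np.argsort(np.abs(w))][:2])   # two smallest in magnitude
print(vals)  # approximately [0.0202, 0.3499], matching eigs/MATLAB
```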
Upvotes: 2