python_freak

Reputation: 259

rounding errors in numpy.linalg.eig() and scipy.linalg.eig()

Is there a way to improve the precision of the output of numpy.linalg.eig() and scipy.linalg.eig()?

I'm diagonalizing a non-symmetric matrix, yet on physical grounds I expect a real spectrum of pairs of positive and negative eigenvalues. Indeed, the eigenvalues do come in pairs, and I have verified by an independent analytical calculation that two of the pairs are correct. The problematic pair is the one with eigenvalues close to zero, which come out with small imaginary parts. I expect this pair to be degenerate at zero, so the imaginary parts should be at most at machine precision, but they are much larger. I think this introduces a small error in the eigenvectors, which can then propagate in subsequent manipulations.

The example below exposes the leftover fictitious imaginary parts by checking the validity of the transformation.

import numpy as np
import scipy.linalg as sla
from functools import reduce  # reduce is not a builtin in Python 3

H = np.array(
    [[ 11.52,  -1.,    -1.,     9.52,   0.,     0.  ],
     [ -1.,    11.52,  -1.,     0.,     9.52,   0.  ],
     [ -1.,    -1.,    11.52,   0.,     0.,     9.52],
     [ -9.52,   0.,     0.,   -11.52,   1.,     1.  ],
     [  0.,    -9.52,   0.,     1.,   -11.52,   1.  ],
     [  0.,     0.,    -9.52,   1.,     1.,   -11.52]],
    dtype=np.float64
)

#print(H)
E,V = np.linalg.eig(H)
#E,V = sla.eig(H)
H2 = reduce(np.dot, [V, np.diag(E), np.linalg.inv(V)])
#print(H2)
print(np.linalg.norm(H-H2))

which gives

3.93435308362e-09

a number of the order of the fictitious imaginary part of the zero eigenvalues.
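To see the spurious imaginary parts directly, you can also print the eigenvalues themselves. A minimal sketch using the same matrix H as above:

```python
import numpy as np

H = np.array(
    [[ 11.52,  -1.,    -1.,     9.52,   0.,     0.  ],
     [ -1.,    11.52,  -1.,     0.,     9.52,   0.  ],
     [ -1.,    -1.,    11.52,   0.,     0.,     9.52],
     [ -9.52,   0.,     0.,   -11.52,   1.,     1.  ],
     [  0.,    -9.52,   0.,     1.,   -11.52,   1.  ],
     [  0.,     0.,    -9.52,   1.,     1.,   -11.52]])

E, V = np.linalg.eig(H)

# sort by real part so the +/- pairs line up
idx = np.argsort(E.real)
for e in E[idx]:
    print(f"{e.real:+.12f}  {e.imag:+.2e}j")

# the largest imaginary part is tiny, but far above machine epsilon
print("max |Im(E)|:", np.abs(E.imag).max())
```

The real parts pair up as expected; only the near-zero pair carries a visible imaginary contamination.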

Upvotes: 3

Views: 3520

Answers (1)

Ryan Walker

Reputation: 3286

You may be losing precision by forming the explicit inverse when computing the error above. If you instead compute the equivalent residual:

# H = V.E.inv(V)  <=>  H.V = V.E
print(np.linalg.norm(H.dot(V) - V.dot(np.diag(E))))
# error: 2.81034671113e-14

the error is much smaller.

Your problem may also be suffering from ill-conditioning, meaning that there will be a very high numerical sensitivity to rounding and other errors. The Bauer-Fike theorem gives an upper bound on the error sensitivity of the eigenvalue problem.

From this theorem, in the worst case an error at machine precision in the input matrix could propagate to an error on the order of 1e-8 in the eigenvalues, since:

machine_precision = np.finfo(np.float64).eps
print(np.linalg.cond(V) * machine_precision)
# 4.54517272701e-08
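Given that bound, and since the spectrum is known on physical grounds to be real, one pragmatic option (my suggestion, not something the asker tried) is to discard imaginary parts that are small compared with the Bauer-Fike estimate, e.g. with np.real_if_close, whose tol argument is expressed in multiples of machine epsilon:

```python
import numpy as np

H = np.array(
    [[ 11.52,  -1.,    -1.,     9.52,   0.,     0.  ],
     [ -1.,    11.52,  -1.,     0.,     9.52,   0.  ],
     [ -1.,    -1.,    11.52,   0.,     0.,     9.52],
     [ -9.52,   0.,     0.,   -11.52,   1.,     1.  ],
     [  0.,    -9.52,   0.,     1.,   -11.52,   1.  ],
     [  0.,     0.,    -9.52,   1.,     1.,   -11.52]])

E, V = np.linalg.eig(H)

# tol is in multiples of eps: 1e9 * 2.2e-16 ~ 2e-7, well above the
# ~1e-9 spurious imaginary parts but far below the true eigenvalues
E_real = np.real_if_close(E, tol=1e9)
print(E_real.dtype)  # float64 once all imaginary parts are dropped
```

This only cleans the eigenvalues; the eigenvectors of the near-degenerate pair remain sensitive, as the condition-number estimate above suggests.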

Upvotes: 2
