DaveP

Reputation: 7102

How to efficiently get eigenvector decomposition with scipy or lapack?

I want to find an eigenvector decomposition of a dense complex matrix A

A = V.diag(lambda).V^-1

I only need a small number of the eigenvectors to reproduce the matrix accurately enough for my needs; however, I need to perform some non-trivial filtering on the eigenvalues to determine which eigenvectors to include. A singular value decomposition is not appropriate for my problem, as the eigenvalues/eigenvectors are the physically meaningful results I want to work with.

I am using scipy.linalg.eig, which is a convenient wrapper around LAPACK's ZGEEV routine. Mathematically, V^-1 should be obtainable from an appropriately scaled version of the left eigenvectors, and I would have expected this to be more efficient and more stable than inverting V, the matrix of right eigenvectors.

However, when I compare the two approaches, using the left eigenvectors in the decomposition is much less accurate than using the inverted right eigenvectors. Presumably this is some kind of round-off or truncation error. I would prefer not to invert the matrix of right eigenvectors, as it is potentially quite large (dimensions in the thousands) and I would need to repeat the operation many times.
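To make the comparison concrete, something like the following minimal sketch shows what I am doing (the random matrix here is just a stand-in for my actual data):

    import numpy as np
    from scipy.linalg import eig

    # Stand-in dense complex matrix; my real A is much larger.
    rng = np.random.default_rng(0)
    n = 6
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

    # Columns of vr are right eigenvectors, columns of vl are left eigenvectors.
    w, vl, vr = eig(A, left=True, right=True)

    # Approach 1: invert the matrix of right eigenvectors. Accurate, but an
    # O(n^3) inversion I would rather not repeat many times.
    recon_inv = vr @ np.diag(w) @ np.linalg.inv(vr)

    # Approach 2: use the conjugate-transposed left eigenvectors in place
    # of V^-1. This is the version that comes out much less accurate.
    recon_left = vr @ np.diag(w) @ vl.conj().T

    print(np.max(np.abs(recon_inv - A)))   # typically round-off sized
    print(np.max(np.abs(recon_left - A)))  # typically much larger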

Is there a routine available in scipy (or in LAPACK, or some other library which I might be able to wrap myself) that gives this decomposition efficiently and accurately?

Upvotes: 2

Views: 1298

Answers (2)

mszep

Reputation: 420

If I understand correctly, you want to reobtain (an approximation to) A by performing the multiplication, but the routine only returns V itself, and you don't want to have to invert V.

However, the routine also returns the matrix of left eigenvectors U, and it is possible (as you note in the comment to my original answer) to obtain the inverse of V from U as follows:

If we evaluate the ij-th element of the matrix U*A*V, then it must equal both

lambda_j * (U_i * V_j)

(letting A act on the right eigenvector V_j) and

lambda_i * (U_i * V_j)

(letting A act on the left eigenvector U_i), where U_i is the i-th row of U and V_j is the j-th column of V. Note, however, that LAPACK routines like zgeev return the left eigenvectors as columns satisfying u^H * A = lambda * u^H, so the rows of U are the conjugate transposes of those columns.

It follows that (lambda_i - lambda_j) * (U_i * V_j) = 0, so U_i * V_j = 0 whenever i != j (assuming distinct eigenvalues): the matrix U*V is diagonal. So, as you say in your own answer, you can construct V^-1 by taking

V^-1_i = U_i / (U_i * V_i)

where the subscripts on U and V^-1 denote rows and those on V denote columns.
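In scipy terms, a minimal sketch of this construction might look like the following (assuming distinct eigenvalues; the test matrix here is just a random placeholder):

    import numpy as np
    from scipy.linalg import eig

    rng = np.random.default_rng(0)
    n = 6
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

    w, vl, vr = eig(A, left=True, right=True)

    # zgeev returns left eigenvectors as columns of vl satisfying
    # vl[:, i].conj().T @ A = w[i] * vl[:, i].conj().T, so the rows of U
    # are the conjugate transposes of those columns.
    U = vl.conj().T

    # U @ vr is diagonal; extract just its diagonal without forming the
    # full product, then rescale each row of U by it.
    d = np.einsum('ij,ji->i', U, vr)
    V_inv = U / d[:, None]

    # V_inv now inverts vr, so the decomposition reconstructs A.
    print(np.allclose(V_inv @ vr, np.eye(n)))       # True
    print(np.allclose(vr @ np.diag(w) @ V_inv, A))  # True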

Upvotes: 0

DaveP

Reputation: 7102

I have now realised that it is possible to get V^-1 from the left eigenvectors U; the trick is to normalise each row of U against the corresponding column of V. I was normalising the columns of U to themselves (i.e. to unit length), which is incorrect, but close enough that it looked nearly right.

So I was correct that you can get V^-1 from U; it was just a mistake in my implementation.
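In code, the difference between the two normalisations is roughly this (a sketch, with a random placeholder matrix and w, vl, vr from scipy.linalg.eig):

    import numpy as np
    from scipy.linalg import eig

    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
    w, vl, vr = eig(A, left=True, right=True)

    U = vl.conj().T

    # What I was doing (incorrect): normalising the rows of U to unit
    # length. zgeev already returns unit-norm eigenvectors, so this is
    # nearly a no-op and the result looks almost, but not quite, right.
    V_inv_wrong = U / np.linalg.norm(U, axis=1)[:, None]

    # The fix: normalise each row of U by its inner product with the
    # corresponding column of vr, i.e. by the diagonal of U @ vr.
    V_inv = U / np.einsum('ij,ji->i', U, vr)[:, None]

    print(np.allclose(vr @ np.diag(w) @ V_inv, A))        # True
    print(np.allclose(vr @ np.diag(w) @ V_inv_wrong, A))  # False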

Upvotes: 1
