ashnair1

Reputation: 365

Efficient solving of generalised eigenvalue problems in Python

Given a generalised eigenvalue problem Ax = λBx, which of the two approaches shown here is the more efficient way to solve it?

import numpy as np
import scipy.linalg

def geneivprob(A, B):
    # Solve the generalised problem directly with SciPy
    lamda, eigvec = scipy.linalg.eig(A, B)
    return lamda, eigvec

def geneivprob2(A, B):
    # Reduce to a standard eigenvalue problem via the Cholesky factor of B:
    # with B = L L^T and y = L^T x, Ax = λBx becomes (L^-1 A L^-T) y = λy
    Linv = np.linalg.inv(np.linalg.cholesky(B))
    C = Linv @ A @ Linv.T
    # (Optionally symmetrise C to suppress round-off: C = (C + C.T) * 0.5)
    lamda, V = np.linalg.eig(C)
    # Map the eigenvectors of C back to eigenvectors of the original problem
    return lamda, Linv.T @ V

I saw the second version in a codebase and was wondering whether it is better than simply using SciPy.
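
For reference, here is a minimal sketch of how I check that the two give the same spectrum on a random test problem (the symmetric A and the diagonally shifted B, used to guarantee positive definiteness, are just made up for illustration):

import numpy as np

# Random symmetric A and symmetric positive definite B so that the
# Cholesky-based reduction in geneivprob2 is applicable.
rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n))
A = (A + A.T) / 2
M = rng.standard_normal((n, n))
B = M @ M.T + n * np.eye(n)

lam1, _ = geneivprob(A, B)
lam2, _ = geneivprob2(A, B)
print(np.allclose(np.sort(lam1.real), np.sort(lam2.real)))  # expect True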

Upvotes: 1

Views: 391

Answers (1)

Bob

Reputation: 14654

Well, there is no obvious advantage to the second approach. It may be better for some classes of matrices, so I would suggest testing with the problems you actually want to solve. Since you are transforming the eigenvectors, you also transform how errors affect the solution, and maybe that, rather than efficiency, is the reason for using the second method: numerical accuracy or convergence.
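
For example, a rough timing comparison along those lines might look like this (just a sketch; geneivprob and geneivprob2 are the two functions from the question, and the synthetic symmetric/positive definite test matrices are an assumption):

import timeit
import numpy as np

# One synthetic problem size; repeat with your own matrices and sizes.
rng = np.random.default_rng(1)
n = 500
A = rng.standard_normal((n, n))
A = (A + A.T) / 2
M = rng.standard_normal((n, n))
B = M @ M.T + n * np.eye(n)

t1 = timeit.timeit(lambda: geneivprob(A, B), number=5)
t2 = timeit.timeit(lambda: geneivprob2(A, B), number=5)
print(f"scipy.linalg.eig:   {t1:.3f} s")
print(f"Cholesky reduction: {t2:.3f} s")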

Another thing is that the second method will only work when B is symmetric positive definite, since it relies on the Cholesky factorization of B.
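
A quick illustration of where the reduction breaks down (the tiny B_indef matrix is just an example):

import numpy as np

# The Cholesky-based reduction assumes B is symmetric positive definite.
# For an indefinite B, np.linalg.cholesky fails outright:
B_indef = np.diag([1.0, -1.0])
try:
    np.linalg.cholesky(B_indef)
except np.linalg.LinAlgError as err:
    print("Cholesky failed:", err)

# For a non-symmetric B it does not even raise: NumPy reads only the lower
# triangle, so geneivprob2 would quietly solve the wrong problem.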

Upvotes: 2
