Reputation: 615
I have written a simple Python script that reads arrays from a file. All of the array elements contain basic algebraic expressions (e.g. y*w+2*(w+y)). These are read in, and every array element is then evaluated using the eval command (I know about the security issues with using eval, but this script will only ever be used by me, and I hopefully won't do something stupid).
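For reference, the read-and-eval step looks roughly like this; the file name, the comma-separated one-row-per-line format, and the way y and w are passed to eval are simplified stand-ins for what my script actually does:
import numpy as np

y = 10
w = 3.45

# Assumed format: each line of 'matrix.txt' holds one row of the matrix,
# with the expressions separated by commas, e.g. "y*w+2*(w+y), -2"
rows = []
with open('matrix.txt') as f:
    for line in f:
        rows.append([eval(expr, {'y': y, 'w': w}) for expr in line.split(',')])

x = np.array(rows)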
These arrays are then used to solve a general eigenvalue problem, and the eigenvalues and eigenvectors are obtained. My question is: how do I minimise (optimise) a particular eigenvalue by varying certain parameters within the matrix? Here is a basic example:
import numpy as np
from scipy.linalg import eig
y=10
w=3.45
x = np.array([[y+w,-2],[-2,w-2*(w+y)]])
eigenVal,eigenVec=eig(x)
print(eigenVal[0])
The matrix here is not the best example, but it is just a minimal one. The idea is to vary the two parameters y and w to minimise a particular eigenvalue (e.g. the first eigenvalue, eigenVal[0]). I have written some basic optimisation algorithms in C++ before, but wanted to explore the Python optimisation libraries. These matrices can get very large, so what is the best way to go about this problem?
Upvotes: 0
Views: 674
Reputation: 8829
You want to use scipy.optimize.minimize.
import numpy as np
from scipy.linalg import eig
from scipy.optimize import minimize
def my_func(x):
    y, w = x
    arr = np.array([[y+w,-2],[-2,w-2*(w+y)]])
    # eig returns complex eigenvalues in general; returning ev[0] as-is
    # is what produces the ComplexWarning in the output below
    ev, ew = eig(arr)
    return ev[0]
x0 = np.array([10, 3.45]) # Initial guess
minimize(my_func, x0)
# ComplexWarning: Casting complex values to real discards the imaginary part
# fun: -127806953.0230245
# hess_inv: array([[1, 0],
# [0, 1]])
# jac: array([ 0., 0.])
# message: 'Optimization terminated successfully.'
# nfev: 60
# nit: 1
# njev: 15
# status: 0
# success: True
# x: array([-63809761.27752077, -63997191.74550374])
This ComplexWarning is worrying; you should use the bounds parameter or the constraints parameter. I don't know the constraints for your problem, so I'm skipping that part. A decent constraint would be that the matrix should be totally positive, which guarantees positive, real eigenvalues. But that is not possible here, given the constant -2 in your matrix.
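As a minimal sketch of the bounds variant: the ranges on y and w below are made up, and I return the real part of the eigenvalue explicitly so the warning goes away; whether that is acceptable depends on your actual matrices.
import numpy as np
from scipy.linalg import eig
from scipy.optimize import minimize

def my_func(x):
    y, w = x
    arr = np.array([[y+w,-2],[-2,w-2*(w+y)]])
    ev, ew = eig(arr)
    return ev[0].real  # discard the imaginary part explicitly

x0 = np.array([10, 3.45])
# Box constraints on y and w; these ranges are only an example
res = minimize(my_func, x0, bounds=[(0, 20), (0, 10)])
print(res.x, res.fun)
Note also that eig does not return the eigenvalues in any particular order, so ev[0] is not guaranteed to track the same eigenvalue as y and w change; since your example matrix is symmetric, scipy.linalg.eigh would give you real eigenvalues sorted in ascending order.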
You can access the result of the optimization as:
a = minimize(my_func, x0)
print(a.x)
# array([-63809761.27752077, -63997191.74550374])
Upvotes: 1