ericf

Reputation: 260

scipy.optimize with matrix constraint

How do I tell fmin_cobyla about a matrix constraint Ax-b >= 0? It won't take it as a vector constraint:

cons = lambda x: dot(A,x)-b

Thanks.

Upvotes: 0

Views: 3188

Answers (2)

unutbu

Reputation: 879083

Since each constraint function must return a scalar value, you can define the scalar constraints dynamically, one per row of A, like this:

constraints = []
for i in range(len(A)):
    def f(x, i = i):
        return np.dot(A[i],x)-b[i]
    constraints.append(f)
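The `i = i` default argument matters here: a plain closure over the loop variable is bound late in Python, so without it every constraint would end up using the last row of A. A standalone illustration of the difference, with a made-up A and b:

```python
import numpy as np

A = np.array([(1, 2), (3, 4)])
b = np.array([1, 1])

# Late binding: every lambda shares the loop variable i and sees its
# final value (here i == 1), so both constraints use the last row.
naive = [lambda x: np.dot(A[i], x) - b[i] for i in range(len(A))]

# Default argument i=i freezes the current value of i into each lambda,
# so each constraint uses its own row.
fixed = [lambda x, i=i: np.dot(A[i], x) - b[i] for i in range(len(A))]

x = np.array([1.0, 0.0])
print([f(x) for f in naive])  # [2.0, 2.0] -- both use row 1
print([f(x) for f in fixed])  # [0.0, 2.0] -- distinct rows
```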

For example, if we lightly modify the example from the docs,

import numpy as np
from scipy import optimize

def objective(x):
    return x[0]*x[1]

A = np.array([(1,2),(3,4)])
b = np.array([1,1])
constraints = []
for i in range(len(A)):
    def f(x, i = i):
        return np.dot(A[i],x)-b[i]
    constraints.append(f)

def constr1(x):
    return 1 - (x[0]**2 + x[1]**2)

def constr2(x):
    return x[1]

x = optimize.fmin_cobyla(objective, [0.0, 0.1], constraints+[constr1, constr2],
                         rhoend = 1e-7)
print(x)

yields

[-0.6  0.8]

PS. Thanks to @seberg for pointing out an earlier mistake.
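As a quick sanity check (not part of the original answer), the reported point [-0.6, 0.8] does satisfy every constraint in the example, to within a small tolerance:

```python
import numpy as np

A = np.array([(1, 2), (3, 4)])
b = np.array([1, 1])
x = np.array([-0.6, 0.8])

tol = 1e-6
# Matrix constraint: every row of Ax - b must be nonnegative.
assert all(np.dot(A[i], x) - b[i] >= -tol for i in range(len(A)))
assert 1 - (x[0]**2 + x[1]**2) >= -tol  # constr1: unit-disk constraint
assert x[1] >= -tol                     # constr2: x[1] nonnegative
print(x[0] * x[1])  # objective value at the reported minimizer
```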

Upvotes: 4

seberg

Reputation: 8975

Actually the documentation says *constraint functions*; it simply expects a list of functions, each returning a single value.

So if you want to do it all in one function, you could just modify the plain Python code of fmin_cobyla: you will find that it defines a wrapping function around your constraint functions, so the change is easy. The Python code is really very short anyway, just a small wrapper around scipy.optimize._cobyla.minimize.

On a side note, if the function you are optimizing is linear (or quadratic) like your constraints, there are probably much better solvers out there.
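For instance, if the objective were linear, c @ x, the problem could go straight to `scipy.optimize.linprog`. A minimal sketch under that assumption (the c, A, and b values below are made up; linprog expects `A_ub @ x <= b_ub`, so the constraint Ax - b >= 0 is passed as -A and -b):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 1.0])                # hypothetical linear objective c @ x
A = np.array([(1.0, 2.0), (3.0, 4.0)])  # constraint: A @ x - b >= 0
b = np.array([1.0, 1.0])

# Flip signs to match linprog's A_ub @ x <= b_ub convention,
# and additionally require x >= 0 via bounds.
res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None), (0, None)])
print(res.x, res.fun)
```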

Upvotes: 1
