Erika L

Reputation: 307

scipy.optimize.minimize ignores constraint

I'm trying to minimize a linear function of one thousand variables. The constraints are (w is a NumPy array with element type float64):

    cons = ({'type': 'ineq', 'fun': lambda w: 0.01 - abs(np.sum(w))},
            {'type': 'ineq', 'fun': lambda w: 1 - abs(np.sum(vMax0(w)))},
            {'type': 'ineq', 'fun': lambda w: 1 - abs(np.sum(vMin0(w)))})

where vMax0 and vMin0 are vectorized versions of max(x, 0) and min(x, 0). The optimization call is:

    w_raw = minimize(totalRisk, w0, bounds = wBounds, constraints = cons, 
                     method='SLSQP', options={'disp': True})

But the resulting parameters are not even in the feasible region. In fact, the parameters leave the feasible region after one or two iterations. What might be causing this? Thanks!
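For reference, the helpers described above can be written with NumPy's built-in elementwise functions; this is a sketch under the assumption that `vMax0`/`vMin0` mean elementwise max(x, 0)/min(x, 0), as stated:

```python
import numpy as np

def vMax0(w):
    # Elementwise max(x, 0): keeps positive entries, zeros out the rest.
    # np.maximum is already vectorized, so np.vectorize is unnecessary.
    return np.maximum(w, 0.0)

def vMin0(w):
    # Elementwise min(x, 0): keeps negative entries, zeros out the rest.
    return np.minimum(w, 0.0)

w = np.array([-0.5, 0.2, 0.3])
print(vMax0(w))  # positive part of w
print(vMin0(w))  # negative part of w
```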

Upvotes: 3

Views: 2300

Answers (1)

ryanpattison

Reputation: 6251

The first constraint forces -0.01 <= sum(w) <= 0.01, which pins the sum near 0, not "close to 1" as presumably intended. Constrain the distance from 1 instead:

    cons = ({'type': 'ineq', 'fun': lambda w: 0.01 - abs(1 - np.sum(w))},
            {'type': 'ineq', 'fun': lambda w: 1 - abs(np.sum(vMax0(w)))},
            {'type': 'ineq', 'fun': lambda w: 1 - abs(np.sum(vMin0(w)))})

Now the absolute difference between the sum and 1 is at most 0.01. :)
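A runnable sketch of the corrected setup on a small toy problem; `totalRisk`, `w0`, and `wBounds` here are placeholder assumptions standing in for the asker's actual objective and data:

```python
import numpy as np
from scipy.optimize import minimize

vMax0 = lambda w: np.maximum(w, 0.0)  # elementwise max(x, 0)
vMin0 = lambda w: np.minimum(w, 0.0)  # elementwise min(x, 0)

def totalRisk(w):
    # Placeholder objective: squared norm of w.
    return np.dot(w, w)

n = 5
w0 = np.full(n, 1.0 / n)        # start at an already-feasible point
wBounds = [(-1.0, 1.0)] * n

# Corrected first constraint: |1 - sum(w)| <= 0.01.
cons = ({'type': 'ineq', 'fun': lambda w: 0.01 - abs(1 - np.sum(w))},
        {'type': 'ineq', 'fun': lambda w: 1 - abs(np.sum(vMax0(w)))},
        {'type': 'ineq', 'fun': lambda w: 1 - abs(np.sum(vMin0(w)))})

res = minimize(totalRisk, w0, bounds=wBounds, constraints=cons,
               method='SLSQP')
print(res.x)
print(abs(1 - res.x.sum()))     # stays within the 0.01 tolerance
```

Note that SLSQP handles the non-smooth abs() here only because the optimum sits away from the kink; for large or ill-conditioned problems, splitting each abs constraint into two linear inequalities is more robust.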

Upvotes: 1
