Reputation: 53
Hi, I'm trying to minimize a simple three-variable function with range constraints on the x variables, but I get 'Inequality constraints incompatible'. Any idea? Thanks!
from scipy.optimize import minimize

def f(x):
    return int(558*x[0]*x[1]*x[2]) - x[2]*(558 - int(558*x[0])) - x[2]*558

x0 = [0.4, 1.0, 2.0]

# real data ranges
# x[0]: 0..1
# x[1]: 1..3
# x[2]: 5..50
cons = (
    {'type': 'ineq', 'fun': lambda x: x[0]},
    {'type': 'ineq', 'fun': lambda x: 1 - x[0]},
    {'type': 'ineq', 'fun': lambda x: x[1] - 1},
    {'type': 'ineq', 'fun': lambda x: 3 - x[1]},
    {'type': 'ineq', 'fun': lambda x: x[2] - 5},
    {'type': 'ineq', 'fun': lambda x: 50 - x[2]},
)

res = minimize(f, x0, constraints=cons)
print(res)
The full result is:
fun: -33490.99993615066
jac: array([ 6.7108864e+07, 6.7108864e+07, -8.9300000e+02])
message: 'Inequality constraints incompatible'
nfev: 8
nit: 2
njev: 2
status: 4
success: False
x: array([ 0.4 , 1. , 49.99999993])
Upvotes: 0
Views: 4422
Reputation: 169
Much later! I ran into a similar problem and found a way to get the SLSQP algorithm (which scipy.optimize.minimize uses by default when constraints are given) to converge: scale the objective so that its output is in roughly the same range as its inputs. SLSQP's error message 'Inequality constraints incompatible' gives no hint of that...
Interestingly, though, for the case in this question SLSQP's answer is nowhere near as good as COBYLA's, and even when SLSQP converges, its result depends on the scaling factor:
from scipy.optimize import minimize

x0 = [0.4, 1.0, 2.0]

# real data ranges
# x[0]: 0..1
# x[1]: 1..3
# x[2]: 5..50
cons = (
    {'type': 'ineq', 'fun': lambda x: x[0]},
    {'type': 'ineq', 'fun': lambda x: 1 - x[0]},
    {'type': 'ineq', 'fun': lambda x: x[1] - 1},
    {'type': 'ineq', 'fun': lambda x: 3 - x[1]},
    {'type': 'ineq', 'fun': lambda x: x[2] - 5},
    {'type': 'ineq', 'fun': lambda x: 50 - x[2]},
)

scaling_coeffs = [0.1, 1, 10, 100, 1000, 10000]
for scaling_coeff in scaling_coeffs:
    def f(x):
        fx = int(558*x[0]*x[1]*x[2]) - x[2]*(558 - int(558*x[0])) - x[2]*558
        return fx / scaling_coeff
    res1 = minimize(f, x0, constraints=cons, method='SLSQP')
    res2 = minimize(f, x0, constraints=cons, method='cobyla')
    fx1 = res1.fun * scaling_coeff if res1.success else "failed"
    fx2 = res2.fun * scaling_coeff if res2.success else "failed"
    print(scaling_coeff, fx1, fx2)
# coeff SLSQP COBYLA
# 0.1 failed -55800.0
# 1 failed -55800.0
# 10 -33490.0 -55800.0
# 100 -33490.0 -55800.0
# 1000 -34487.0 -55800.0
# 10000 -6025.96 -55800.0
Note: I know this answer is worse in this case than the accepted one. I post it anyway because I ran into the same problem and solved it this way with good results in my case. I had bounds, and COBYLA does not accept a bounds argument (although you can pass bounds as constraints; see "Does scipy's minimize function with method 'COBYLA' accept bounds?").
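For reference, here is a minimal sketch of that workaround on the question's problem. The bounds_to_constraints helper is my own hypothetical name, not a scipy function: it just turns (lo, hi) pairs into the 'ineq' dicts that COBYLA understands, reproducing the constraint tuple written out above.

    from scipy.optimize import minimize

    def bounds_to_constraints(bounds):
        # Hypothetical helper: convert (lo, hi) pairs into COBYLA-style
        # inequality constraints (each 'fun' must be >= 0 when feasible).
        cons = []
        for i, (lo, hi) in enumerate(bounds):
            # Bind i/lo/hi as default args so each lambda keeps its own values.
            cons.append({'type': 'ineq', 'fun': lambda x, i=i, lo=lo: x[i] - lo})
            cons.append({'type': 'ineq', 'fun': lambda x, i=i, hi=hi: hi - x[i]})
        return cons

    def f(x):
        return int(558*x[0]*x[1]*x[2]) - x[2]*(558 - int(558*x[0])) - x[2]*558

    cons = bounds_to_constraints([(0, 1), (1, 3), (5, 50)])
    res = minimize(f, [0.4, 1.0, 2.0], constraints=cons, method='cobyla')
    print(res.fun)

Note the i=i, lo=lo default-argument trick: without it, every lambda would close over the loop variables and all constraints would use the last bound pair.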
Upvotes: 0
Reputation: 56
Hello, I suspect the problem comes from the numerical method used.
By default, when constraints are given, minimize
uses Sequential Least Squares Programming (SLSQP), which is a gradient-based method. The function to be minimized therefore has to be smooth, but because of your use of int
it is not.
Using the alternative method, Constrained Optimization BY Linear Approximation (COBYLA), which is of a different (derivative-free) nature, I get the following:
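To see why, note that int() makes f piecewise constant: a tiny perturbation of x leaves f completely unchanged until a truncation boundary is crossed, where f jumps. The finite-difference gradients SLSQP computes are therefore either zero or enormous (compare the jac values of ~6.7e7 in the question's output). A minimal sketch:

    # The question's objective: int() truncation makes it piecewise constant.
    def f(x):
        return int(558*x[0]*x[1]*x[2]) - x[2]*(558 - int(558*x[0])) - x[2]*558

    x0 = [0.4, 1.0, 2.0]

    # A tiny perturbation does not change f at all (zero finite-difference slope)...
    print(f(x0), f([0.4 + 1e-9, 1.0, 2.0]))   # same value twice

    # ...while a larger one crosses a truncation boundary and f jumps.
    print(f([0.4 + 1e-3, 1.0, 2.0]))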
from scipy.optimize import minimize

def f(x):
    return int(558*x[0]*x[1]*x[2]) - x[2]*(558 - int(558*x[0])) - x[2]*558

x0 = [0.4, 1.0, 2.0]

# real data ranges
# x[0]: 0..1
# x[1]: 1..3
# x[2]: 5..50
cons = (
    {'type': 'ineq', 'fun': lambda x: x[0]},
    {'type': 'ineq', 'fun': lambda x: 1 - x[0]},
    {'type': 'ineq', 'fun': lambda x: x[1] - 1},
    {'type': 'ineq', 'fun': lambda x: 3 - x[1]},
    {'type': 'ineq', 'fun': lambda x: x[2] - 5},
    {'type': 'ineq', 'fun': lambda x: 50 - x[2]},
)

res = minimize(f, x0, constraints=cons, method="cobyla")
print(res)
which prints:
fun: -55800.0
maxcv: 7.395570986446986e-32
message: 'Optimization terminated successfully.'
nfev: 82
status: 1
success: True
x: array([-7.39557099e-32, 1.93750000e+00, 5.00000000e+01])
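As a side note: if the int() truncation is only an artifact and can be dropped (an assumption on my part about your real problem), the objective becomes smooth, and SLSQP can then handle the range constraints directly through its bounds argument and should reach the same optimum:

    from scipy.optimize import minimize

    def f_smooth(x):
        # Same objective without the int() truncation, so it is differentiable.
        return 558*x[0]*x[1]*x[2] - x[2]*(558 - 558*x[0]) - x[2]*558

    res = minimize(f_smooth, [0.4, 1.0, 2.0],
                   bounds=[(0, 1), (1, 3), (5, 50)],  # SLSQP accepts bounds directly
                   method='SLSQP')
    print(res.fun, res.x)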
Upvotes: 2