Reputation: 310
I have a constrained optimization problem where the objective function is convex, subject to some inequality constraints on the input vector. The only issue is that SLSQP claims the constraints are incompatible, which is untrue: I've checked both inequality constraints at the initial vector and they are both satisfied (see the quick check after the output below).
I understand that my objective function may be hard to optimize, but I thought that supplying the Jacobians for everything would help with this. It seems like there should be some step-size parameter I could tune in the solver to fix this, but varying the tolerances for SLSQP didn't fix the problem.
Below is a stripped-down version of the code for one instance of the problem (in general many of these variables may change, including the dimension of the vector I am optimizing over), as well as the output it returns.
import numpy as np
from scipy.optimize import minimize
###############################
# Problem Setup
###############################
omega = 2*np.pi/(1E-6)
r = 3.26
T = 1E-6/r
omega_s = 2*np.pi/T
yf = 120.E-6
analytic_inf = 0.7611241494308488
analytic_inf_v = 43.30005226677565
n_vec = np.arange(1,4)
normalizing_vec = 1-n_vec**2*r**2
fqc_coeff = 7.422483151210897
fqc_mat = n_vec*omega_s/(1-(n_vec*r)**2)
fxd_mat = omega_s*n_vec
epsilon_coeff = 0.019552534524785916
delta_coeff = 0.2015101389747311
alpha = 160.96016835831352
###############################
# Cost Function, Jacobians & Constraints
###############################
gaussian = lambda a: np.exp(-(fqc_coeff*(fqc_mat.dot(a)+omega_s/(2*np.pi))**2))
costa = lambda a: alpha*(1- gaussian(a))
costd = lambda a: delta_coeff*1/2*np.sum(a**2)
coste = lambda a: epsilon_coeff*1/2*np.sum((n_vec*2*np.pi*a)**2)
cost_f = lambda a: costa(a) + costd(a) + coste(a)
# Constraints
eqa = lambda a: (fqc_mat.dot(a) + omega_s/(2*np.pi))
ineqa = lambda a: eqa(a)*fqc_coeff + 1/np.sqrt(2)
jaca = lambda a: fqc_coeff*fqc_mat
ineqb = lambda a: -eqa(a)*fqc_coeff+ 1/np.sqrt(2)
jacb = lambda a: -fqc_coeff*fqc_mat
cons = [{'type':'ineq','fun':ineqa,'jac':jaca},
{'type':'ineq','fun':ineqb,'jac':jacb}]
# Jacobian, if solver uses it
jacobian = lambda a: alpha*2*fqc_coeff**2*gaussian(a)*fqc_mat.dot(a)*fqc_mat + (
delta_coeff*a) + (
epsilon_coeff*(2*np.pi*n_vec)**2*a)
x0 = np.array([1.17120635, 0.54328102, 0.35740402])
result = minimize(cost_f, x0, method='SLSQP', jac=jacobian,
constraints=cons, options={'disp':True,'maxiter':1001,'ftol':1E-5})
print(ineqa(result.x))
print(ineqb(result.x))
And the output:
Inequality constraints incompatible (Exit mode 4)
Current function value: 1.6377594465810514
Iterations: 1
Function evaluations: 1
Gradient evaluations: 1
0.7431425286510408
0.6710710337220541
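For reference, here is the quick feasibility check I mentioned above (run with the same definitions as in the script); both constraint values come out positive at the starting point, so x0 is feasible:
# quick check, using ineqa, ineqb and x0 as defined above
print(ineqa(x0))   # positive, so the first inequality constraint holds at x0
print(ineqb(x0))   # positive, so the second inequality constraint holds at x0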
Upvotes: 6
Views: 2268
Reputation: 703
Looking at the Fortran code in the SciPy repository:
IF (mode.EQ.6) THEN
    IF (n.EQ.meq) THEN
        mode = 4
    ENDIF
ENDIF
and at the accompanying comment
6: SINGULAR MATRIX C IN LSQ SUBPROBLEM
it seems there may be an issue with the LSQ subproblem. I'm no expert in this area, but you can take a look at section 2.2.3 of the seminal paper "A software package for sequential quadratic programming" by Dieter Kraft to see why a singularity is occurring.
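Not a definitive diagnosis, but one structural thing that stands out in the question's setup: the two inequality constraints are plus/minus the same affine expression, so their Jacobians jaca and jacb are constant and exact negatives of each other, i.e. the stacked constraint Jacobian has rank 1. The sketch below (constants copied from the question) just demonstrates that observation; whether it is actually what triggers the singular-matrix exit I can't say for certain.
import numpy as np

# constants copied from the question's setup
r = 3.26
omega_s = 2*np.pi*r/1E-6              # equals 2*np.pi/T with T = 1E-6/r
n_vec = np.arange(1, 4)
fqc_coeff = 7.422483151210897
fqc_mat = n_vec*omega_s/(1 - (n_vec*r)**2)

# jaca(a) = fqc_coeff*fqc_mat and jacb(a) = -fqc_coeff*fqc_mat in the question,
# so stacking the two constraint gradients gives a rank-deficient matrix
G = np.vstack([fqc_coeff*fqc_mat, -fqc_coeff*fqc_mat])
print(np.linalg.matrix_rank(G))       # prints 1, not 2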
Upvotes: 0