Reputation: 626
I have compared many Quadratic Programming (QP) solvers, such as CVXOPT, qpOASES, and OSQP, and found that OSQP works faster and better for my application.
Now, I want to minimize an indefinite quadratic function subject to both equality and inequality constraints that may be violated depending on various factors. So I want to use the L1 penalty method, which penalizes the violated constraints.
I have modified an example so that the constraints are violated:
import osqp
import scipy.sparse as sparse
import numpy as np
# Define problem data
P = sparse.csc_matrix([[4., 1.], [1., 2.]])
q = np.array([1., 1.])
A = sparse.csc_matrix([[1., 0.], [0., 1.], [1., 0.], [0., 1.]])
# Rows 1-2: box constraints 0 <= x1 <= 1 and 0 <= x2 <= 1.
# Rows 3-4: equality constraints x1 = 0.2 and x2 = 1.1 (the latter conflicts with x2 <= 1).
l = np.array([0., 0., 0.2, 1.1])
u = np.array([1., 1., 0.2, 1.1])
# Create an OSQP object
prob = osqp.OSQP()
# Setup workspace and change alpha parameter
prob.setup(P, q, A, l, u, alpha=1.0)
# Solve problem
res = prob.solve()
print(res.x)
Obviously, this problem is infeasible, so we need to change the objective function to penalize the constraint violations. I need help formulating this problem so that it can be solved with OSQP's Python interface.
Alternatively, please let me know if any other Python interface is available for solving this kind of constraint-violation problem.
Upvotes: 1
Views: 1351
Reputation: 1
I had the same problem, and this question helped a lot. This is how I solved it with the OSQP interface.
I redefined the example to be:
# Define problem data
P = sparse.csc_matrix([[4., 1.], [1., 2.]])
q = np.array([1., 1.])
A = sparse.csc_matrix([[1., 0.], [0., 1.], [1., 1.]])
l = np.array([0., 0., 3])
u = np.array([1., 1., 3])
Here the first and second variables are each constrained to be at most 1, but their sum must equal 3. This makes the problem infeasible.
Now let's transform the inequality constraints, as Erwin suggested, by adding two slack variables.
# Redefine problem data with 2 slack variables
# Added quadratic penalties to variables s1 and s2 with penalty coefficient == 1
P = sparse.csc_matrix([[4., 1., 0., 0.],
                       [1., 2., 0., 0.],
                       [0., 0., 1., 0.],
                       [0., 0., 0., 1.]])
# Zero linear penalties for s1 and s2.
q = np.array([1., 1., 0., 0.])
# First constraint is x1 <= s1, second is s1 >= 0.
# Third constraint is x2 <= s2, fourth is s2 >= 0.
# Fifth constraint keeps x1 + x2 = 3.
A = sparse.csc_matrix([[1., 0., -1., 0.],
                       [0., 0., 1., 0.],
                       [0., 1., 0., -1.],
                       [0., 0., 0., 1.],
                       [1., 1., 0., 0.]])
l = np.array([-np.inf, 0., -np.inf, 0., 3])
u = np.array([0., np.inf, 0., np.inf, 3])
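To reproduce the run below, the workspace is set up and solved exactly as in the question. This is a minimal sketch, assuming the same imports and the redefined P, q, A, l, u above:
# Create an OSQP object, set up the workspace, and solve
prob = osqp.OSQP()
prob.setup(P, q, A, l, u, alpha=1.0)
res = prob.solve()
print(res.x)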
When I run the solver, the problem has a solution, and exceeding the original upper bounds is softly penalised.
iter objective pri res dua res rho time
1 -4.9403e-03 3.00e+00 5.99e+02 1.00e-01 8.31e-04s
50 1.3500e+01 1.67e-07 7.91e-08 9.96e-01 8.71e-04s
status: solved
number of iterations: 50
optimal objective: 13.5000
run time: 8.93e-04s
optimal rho estimate: 1.45e+00
[1.00 2.00 1.00 2.00]
Hope this helps somebody.
Upvotes: -1
Reputation: 16724
In general, abs functions can be dangerous (they are non-differentiable). A standard way to deal with this is to add slacks. E.g.
g(x) <= 0
becomes
g(x) <= s
s >= 0
Now add a term mu*s to the objective.
For h(x) = 0 one could do
h(x) = s1 - s2
s1, s2 >= 0
and add mu*(s1+s2) to the objective.
As usual: this is just one approach (there are other formulations).
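Applied to the question's example, this slack approach can be sketched in OSQP as follows. This is only an illustration, not part of the original answer: the penalty weight mu = 10., the variable ordering [x1, x2, sp1, sm1, sp2, sm2], and the choice to keep the box constraints hard while softening only the two equality constraints are all assumptions.
import osqp
import numpy as np
import scipy.sparse as sparse

mu = 10.  # hypothetical penalty weight; larger values enforce the equalities harder

# Variables: z = [x1, x2, sp1, sm1, sp2, sm2], where each softened equality
# h(x) = 0 is rewritten as h(x) = sp - sm with sp, sm >= 0.
# Quadratic term: the original P on x, no quadratic cost on the slacks.
P = sparse.block_diag([sparse.csc_matrix([[4., 1.], [1., 2.]]),
                       sparse.csc_matrix((4, 4))], format='csc')
# Linear term: the original q on x plus the L1 penalty mu*(sp1 + sm1 + sp2 + sm2).
q = np.array([1., 1., mu, mu, mu, mu])

A = sparse.csc_matrix([[1., 0.,  0., 0.,  0., 0.],   # 0 <= x1 <= 1 (kept hard)
                       [0., 1.,  0., 0.,  0., 0.],   # 0 <= x2 <= 1 (kept hard)
                       [1., 0., -1., 1.,  0., 0.],   # x1 - sp1 + sm1 = 0.2 (softened)
                       [0., 1.,  0., 0., -1., 1.],   # x2 - sp2 + sm2 = 1.1 (softened)
                       [0., 0.,  1., 0.,  0., 0.],   # sp1 >= 0
                       [0., 0.,  0., 1.,  0., 0.],   # sm1 >= 0
                       [0., 0.,  0., 0.,  1., 0.],   # sp2 >= 0
                       [0., 0.,  0., 0.,  0., 1.]])  # sm2 >= 0
l = np.array([0., 0., 0.2, 1.1, 0., 0., 0., 0.])
u = np.array([1., 1., 0.2, 1.1, np.inf, np.inf, np.inf, np.inf])

prob = osqp.OSQP()
prob.setup(P, q, A, l, u)
res = prob.solve()
print(res.x[:2])  # the x part; the slacks report how much each equality was relaxed
With mu = 10 this should return roughly x1 = 0.2 (that equality is satisfiable and the penalty dominates) and x2 = 1.0, i.e. x2 stops at its hard upper bound and the remaining gap of 0.1 to the target 1.1 is absorbed by the slack sm2 instead of making the problem infeasible.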
Upvotes: 2