Gabriele Vecchio

Reputation: 21

Gekko seems to ignore an equation (MINLP)

I am using Gekko (latest version as of September 2023) to solve the following MINLP problem.

Here is the problem with its constraints in LaTeX:

$\max_{x} \sum_{i=1}^{K} \sum_{j=1}^{N}x_i p_i b_{i,j} l_j$

subject to the following constraints:

$\sum_{i=1}^{K} \operatorname{sign}(x_i)\, x_i p_i \le M$
$\sum_{i=1}^{K} x_i p_i b^{IB}_i \le +\beta_{tol}$
$\sum_{i=1}^{K} x_i p_i b^{IB}_i \ge -\beta_{tol}$

I also don't want a solution of all zero, so I impose this additional constraint:

$\sum_{i=1}^{K} |x_i| > 0$

I have implemented this problem using GEKKO as follows:

from gekko import GEKKO
import numpy as np

b_IB = np.array([1.1, 1.2, 0.9, 0.8])
b = np.array(([1, 2, 4],[6, 3, 8], [9, 11, 12], [0.5, 0.2, 0.3]))
l = np.array([0.06, 0.05, 0.10])
p = np.array([100, 350, 220, 120])
K,N = np.shape(b)
beta_tol = 1000
M = 100000
    
#Solve MINLP problem
m = GEKKO()
x = [m.Var(integer=True) for i in range(K)]
m.Equation(sum([m.sign3(x[i])*x[i]*p[i] for i in range(K)])<=M)
m.Equation(sum([x[i]*p[i]*b_IB[i] for i in range(K)])<=+beta_tol)
m.Equation(sum([x[i]*p[i]*b_IB[i] for i in range(K)])>=-beta_tol)
m.Equation(sum(np.abs(x))>0)
m.Maximize(np.sum([[x[i]*p[i]*b[i,j]*l[j] for i in range(K)] for j in range(N)]))
m.options.SOLVER=1
m.solve()
print(x)
m.cleanup()

Gekko solved this problem very quickly but gave me a solution of all zero. This solution clearly violates the last constraint $\sum_{i=1}^{K} |x_i| > 0$.

Can Gekko simply ignore a constraint like that? What am I doing wrong?

This is the output I get from Gekko:

apm 51.183.113.27_gk_model9
 ----------------------------------------------------------------
 APMonitor, Version 1.0.1
 APMonitor Optimization Suite
 ----------------------------------------------------------------
 
 
 --------- APM Model Size ------------
 Each time step contains
   Objects      :            0
   Constants    :            0
   Variables    :           24
   Intermediates:            0
   Connections  :            0
   Equations    :           17
   Residuals    :           17
 
 Number of state variables:             24
 Number of total equations: -           16
 Number of slack variables: -           12
 ---------------------------------------
 Degrees of freedom       :             -4
 
 * Warning: DOF <= 0
 ----------------------------------------------
 Steady State Optimization with APOPT Solver
 ----------------------------------------------
Iter:     1 I:  0 Tm:      0.00 NLPi:    2 Dpth:    0 Lvs:    3 Obj:  0.00E+00 Gap:       NaN
--Integer Solution:   0.00E+00 Lowest Leaf:   0.00E+00 Gap:   0.00E+00
Iter:     2 I:  0 Tm:      0.00 NLPi:    2 Dpth:    1 Lvs:    3 Obj:  0.00E+00 Gap:  0.00E+00
 Successful solution
 
 ---------------------------------------------------
 Solver         :  APOPT (v1.0)
 Solution time  :   3.579999999783468E-002 sec
 Objective      :   0.000000000000000E+000
 Successful solution
 ---------------------------------------------------
 
[[0.0], [0.0], [0.0], [0.0]]

Upvotes: 2

Views: 83

Answers (2)

John Hedengren

Reputation: 14376

Gekko has a solver tolerance, so >0 and >=0 are equivalent for numerical solutions. To get the intended solution, use m.abs3() and set the constraint to >0.01:

m.Equation(m.sum([m.abs3(x[i]) for i in range(K)])>0.01)

Also, use m.sum() for improved performance instead of np.sum() or sum(). Here is a complete script:

from gekko import GEKKO
import numpy as np

b_IB = np.array([1.1, 1.2, 0.9, 0.8])
b = np.array(([1, 2, 4],[6, 3, 8], [9, 11, 12], [0.5, 0.2, 0.3]))
l = np.array([0.06, 0.05, 0.10])
p = np.array([100, 350, 220, 120])
K,N = np.shape(b)
beta_tol = 1000
M = 100000
    
#Solve MINLP problem
m = GEKKO()
x = [m.Var(integer=True) for i in range(K)]
m.Equation(m.sum([m.sign3(x[i])*x[i]*p[i] for i in range(K)])<=M)
m.Equation(m.sum([x[i]*p[i]*b_IB[i] for i in range(K)]) <=  beta_tol)
m.Equation(m.sum([x[i]*p[i]*b_IB[i] for i in range(K)]) >= -beta_tol)
m.Equation(m.sum([m.abs3(x[i]) for i in range(K)])>0.01)
m.Maximize(m.sum([sum([x[i]*p[i]*b[i,j]*l[j] \
           for i in range(K)]) \
           for j in range(N)]))
m.options.SOLVER=1
m.solve()
print(x)
m.cleanup()

The solution is:

--Integer Solution:   8.40E+00 Lowest Leaf:   5.04E+00 Gap:   5.00E-01
Iter:    41 I:  0 Tm:      0.00 NLPi:    3 Dpth:    4 Lvs:    4 Obj:  5.04E+02 Gap:  5.00E-01
Iter:    42 I: -1 Tm:      0.00 NLPi:    1 Dpth:    6 Lvs:    3 Obj:  5.04E+00 Gap:  5.00E-01
--Integer Solution:   8.40E+00 Lowest Leaf:   5.04E+00 Gap:   5.00E-01
Iter:    43 I:  0 Tm:      0.00 NLPi:    3 Dpth:    6 Lvs:    2 Obj:  5.04E+02 Gap:  5.00E-01
Iter:    44 I: -1 Tm:      0.00 NLPi:    1 Dpth:    5 Lvs:    1 Obj:  5.04E+00 Gap:  5.00E-01
--Integer Solution:   8.40E+00 Lowest Leaf:   5.04E+02 Gap:  -1.93E+00
Iter:    45 I:  0 Tm:      0.00 NLPi:    2 Dpth:    5 Lvs:    1 Obj:  5.04E+02 Gap: -1.93E+00
 Successful solution
 
 ---------------------------------------------------
 Solver         :  APOPT (v1.0)
 Solution time  :   0.334900000001653      sec
 Objective      :    8.40000000000000     
 Successful solution
 ---------------------------------------------------
 
[[0.0], [0.0], [0.0], [-1.0]]

Upvotes: 1

Reinderien

Reputation: 15283

I don't have Gekko, so I'll demonstrate with scipy.

To start, your objective is just a dot product with

pbl = -p * (b @ l)

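As a quick check (a minimal sketch using the question's data, not part of the original answer), the double-sum objective $\sum_i \sum_j x_i p_i b_{i,j} l_j$ collapses to x.dot(p * (b @ l)) because $l$ only contracts the column index of $b$; the minus sign above is just there because minimize minimizes:

import numpy as np

b = np.array([[1, 2, 4], [6, 3, 8], [9, 11, 12], [0.5, 0.2, 0.3]])
l = np.array([0.06, 0.05, 0.10])
p = np.array([100, 350, 220, 120])

# Any integer x should give the same value both ways.
rng = np.random.default_rng(0)
x = rng.integers(-5, 5, size=4)

double_sum = sum(x[i] * p[i] * b[i, j] * l[j]
                 for i in range(4) for j in range(3))
dot_form = x.dot(p * (b @ l))
assert np.isclose(double_sum, dot_form)
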
If you were to use minimize, it could look like

from scipy.optimize import LinearConstraint, check_grad

def cost(x: np.ndarray) -> float:
    return x.dot(pbl)

def jac(x: np.ndarray) -> np.ndarray:
    return pbl

x0 = np.zeros(K)
error = check_grad(func=cost, grad=jac, x0=x0)
assert error < 1e-6

constraint_beta = LinearConstraint(
    A=p*b_IB, lb=-beta_tol, ub=beta_tol,
)

# ...

but since it's unbounded and requires an absolute value, it won't work very well: the abs is nondifferentiable at zero.
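The usual workaround (a sketch of the standard reformulation, not specific to this answer) is to introduce auxiliary variables $t_i \ge |x_i|$ through two linear inequalities, $t_i \ge x_i$ and $t_i \ge -x_i$, and then bound or penalize $t_i$ instead of $|x_i|$; the xabs variables in the model below play exactly this role.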

Better to just model it as linear:

import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp
import scipy.sparse as sp


b_IB = np.array((1.1, 1.2, 0.9, 0.8))
b = np.array((
    (1.0,  2.0,  4.0),
    (6.0,  3.0,  8.0),
    (9.0, 11.0, 12.0),
    (0.5,  0.2,  0.3),
))
l = np.array((0.06, 0.05, 0.10))
p = np.array((100, 350, 220, 120))
K, N = b.shape
beta_tol = 1e3
M = 1e5

# Combined p, b and l contribution to objective
pbl = p * (b @ l)

# Variables: x (K), |x| (K)
# Maximize x.pbl; minimize xabs >= |x|
c = np.concatenate((-pbl, pbl/10))
integrality = np.concatenate((np.ones(K), np.zeros(K)))

constraint_beta = LinearConstraint(
    A=sp.hstack((
        p * b_IB,
        sp.csc_array((1, K)),
    ), format='csc'),
    lb=-beta_tol, ub=beta_tol,
)

constraint_m = LinearConstraint(
    A=sp.hstack((
        sp.csc_array((1, K)),
        p,
    ), format='csc'),
    ub=M,
)

# xabs >= +x: -x + xabs >= 0
# xabs >= -x: +x + xabs >= 0
constraint_xabs = LinearConstraint(
    A=sp.bmat(
        (
            (-sp.eye(K), sp.eye(K)),
            ( sp.eye(K), sp.eye(K)),
        ), format='csc',
    ),
    lb=np.zeros(2*K),
)

result = milp(
    c=c, integrality=integrality,
    bounds=Bounds(),
    constraints=(
        constraint_beta,
        constraint_m,
        constraint_xabs,
    ),
)
assert result.success, result.message
x, xabs = np.split(result.x, (K,))

print(result.message)
print(' x  =', x)
print('|x| =', xabs)
print(f'f = {pbl.dot(x):.2f}')
print(f'M constraint: {xabs.dot(p)} <= {M}')
print(f'β constraint: {-beta_tol} <= {x.dot(p*b_IB)} <= {beta_tol}')

Output:

 x  = [   0.    0.  216. -436.]
|x| = [ -0.  -0. 216. 436.]
f = 105158.40
M constraint: 99840.0 <= 100000.0
β constraint: -1000.0 <= 912.0 <= 1000.0
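
As a quick arithmetic check (mine, not from the original answer): with this data $pbl \approx (56,\ 458.5,\ 503.8,\ 8.4)$, so $f = 216 \cdot 503.8 - 436 \cdot 8.4 = 105158.4$, and the M constraint uses $216 \cdot 220 + 436 \cdot 120 = 99840 \le 100000$, matching the printed values.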

Upvotes: 1
