
Reputation: 47

Formulating NLopt optimisation problem with arguments

I am trying to get to grips with using NLopt for optimisation in Python. I have created a highly simplified problem that is somewhat analogous to what I intend to use NLopt for in the future. For reasons I won't get into, I must use a derivative-free global optimiser such as this. I cannot work out what I am doing wrong when passing additional arguments to both my objective function and my constraint function.

import nlopt
from numpy import *

def objective_func(x,grad,A):

    return x * x + A

def constraint_func(x,grad,args):

    B = args[0]
    C = args[1]

    return B * x + C

A_val = -1
B_val = 1
C_val = 1

arguments = (B_val,C_val)

initial_guess = 30.

opt = nlopt.opt(nlopt.GN_ISRES,1)
opt.set_min_objective(lambda x,grad: objective_func(x,grad,A_val))
opt.add_inequality_constraint(lambda x,grad: constraint_func(x,grad,arguments))
opt.set_lower_bounds(-100.)
opt.set_upper_bounds(100.)
xopt = opt.optimize([initial_guess])
print('xopt: '+str(xopt))

Any help in this would be appreciated.

Cheers

Upvotes: 0

Views: 471

Answers (1)

ken

Reputation: 3211

This is a very misleading error message, especially in your case, because the problem is not the arguments but the return type.

NLopt always passes x as an array, so in your case x is an array of length 1. As written, your functions therefore return a length-1 array, which NLopt rejects.

Change your functions to return a single float value.

For example:

import nlopt
from numpy import *


def objective_func(x, grad, A):
    # x is an np.ndarray of shape (1,).
    x = x[0]
    return x * x + A


def constraint_func(x, grad, B, C):
    # x is an np.ndarray of shape (1,).
    x = x[0]
    return B * x + C


A_val = -1
B_val = 1
C_val = 1

initial_guess = 30.0

opt = nlopt.opt(nlopt.GN_ISRES, 1)
opt.set_min_objective(lambda x, grad: objective_func(x, grad, A_val))
opt.add_inequality_constraint(lambda x, grad: constraint_func(x, grad, B_val, C_val))
opt.set_lower_bounds(-100.0)
opt.set_upper_bounds(100.0)
opt.set_maxtime(10)  # for debug
xopt = opt.optimize([initial_guess])
print("xopt: " + str(xopt))

Output:

xopt: [-1.]
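As a side note (not part of the original answer), the standard library's functools.partial is an alternative to the lambda wrappers for binding extra arguments. A minimal sketch of the idea, with the nlopt calls themselves omitted:

```python
from functools import partial
import numpy as np

def objective_func(x, grad, A):
    # NLopt passes x as a 1-D array; return a plain scalar float, not an array.
    return float(x[0] * x[0] + A)

# Bind the extra argument once; the result has the (x, grad) signature
# that NLopt expects, so it could be given to opt.set_min_objective.
bound_objective = partial(objective_func, A=-1)

# NLopt would call the bound function as f(x, grad):
value = bound_objective(np.array([30.0]), None)
print(value)  # 899.0
```

This avoids re-capturing the argument in a closure and keeps the callback signature explicit.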

Upvotes: 0
