Reputation: 734
A simple optimization example has 2 local minima: (0,0,8) with objective 936.0 and (7,0,0) with objective 951.0. What are techniques to use local optimizers in Python Gekko (APOPT, BPOPT, IPOPT) to find a global solution?
from gekko import GEKKO
m = GEKKO(remote=False)
x = m.Array(m.Var,3,lb=0)
x1,x2,x3 = x
m.Minimize(1000-x1**2-2*x2**2-x3**2-x1*x2-x1*x3)
m.Equations([8*x1+14*x2+7*x3==56,
             x1**2+x2**2+x3**2>=25])
m.solve(disp=False)
res=[print(f'x{i+1}: {xi.value[0]}') for i,xi in enumerate(x)]
print(f'Objective: {m.options.objfcnval:.2f}')
This produces a local minimum:
x1: 7.0
x2: 0.0
x3: 0.0
Objective: 951.00
There are solvers for a global optimum such as BARON, COCOS, GlobSol, ICOS, LGO, LINGO, and OQNLP, but what are some quick strategies that can be used with a local optimizer to search for a global solution? Some industrial applications have highly nonlinear models that haven't been fully tested for global solutions in control and design. Can the strategy be parallelized in Python?
Upvotes: 0
Views: 404
Reputation: 16724
Multi-start approaches (using random starting points) can be useful. No guarantee about global optimality, but at least you are protected a bit against some embarrassingly bad local solutions. Some local NLP solvers have this built-in (e.g. Knitro).
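The multi-start idea is independent of Gekko: sample many starting points, run a local solver from each, and keep the best result. A minimal dependency-free sketch with plain gradient descent on a 1-D function that has two minima (the function, step size, and iteration count here are illustrative choices, not from the answer):

```python
import random

def f(x):
    # two local minima: one near x = 0.96, a lower one near x = -1.04
    return (x*x - 1.0)**2 + 0.3*x

def grad(x):
    return 4.0*x*(x*x - 1.0) + 0.3

def local_min(x0, lr=0.01, iters=2000):
    # simple gradient descent as a stand-in for a local NLP solver
    x = x0
    for _ in range(iters):
        x -= lr*grad(x)
    return x

random.seed(0)
starts = [random.uniform(-2, 2) for _ in range(20)]  # random starting points
cands = [local_min(x0) for x0 in starts]             # one local solve per start
best = min(cands, key=f)                             # keep the best local solution
print(f'x*: {best:.3f}, f(x*): {f(best):.3f}')
```

The same loop works with any local solver in place of `local_min`; success depends on at least one start landing in the global basin, which is why the number and spread of starting points matters.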
Here is Python code for the example using a multi-start method to get the global solution. It uses multi-threading to parallelize the search.
import numpy as np
import threading
import time, random
from gekko import GEKKO
class ThreadClass(threading.Thread):
    def __init__(self, id, xg):
        s = self
        s.id = id
        s.m = GEKKO(remote=False)
        s.xg = xg
        s.objective = float('NaN')
        # initialize variables
        s.m.x = s.m.Array(s.m.Var,3,lb=0)
        for i in range(3):
            s.m.x[i].value = xg[i]
        s.m.x1,s.m.x2,s.m.x3 = s.m.x
        # Equations
        s.m.Equation(8*s.m.x1+14*s.m.x2+7*s.m.x3==56)
        s.m.Equation(s.m.x1**2+s.m.x2**2+s.m.x3**2>=25)
        # Objective
        s.m.Minimize(1000-s.m.x1**2-2*s.m.x2**2-s.m.x3**2
                     -s.m.x1*s.m.x2-s.m.x1*s.m.x3)
        # Set solver option
        s.m.options.SOLVER = 1
        threading.Thread.__init__(s)
    def run(self):
        print('Running application ' + str(self.id) + '\n')
        self.m.solve(disp=False,debug=0) # solve
        # Retrieve objective if successful
        if (self.m.options.APPSTATUS==1):
            self.objective = self.m.options.objfcnval
        else:
            self.objective = float('NaN')
        self.m.cleanup()
# Optimize at mesh points
x1_ = np.arange(0.0, 10.0, 3.0)
x2_ = np.arange(0.0, 10.0, 3.0)
x3_ = np.arange(0.0, 10.0, 3.0)
x1,x2,x3 = np.meshgrid(x1_,x2_,x3_)
threads = [] # Array of threads
# Load applications
id = 0
for i in range(x1.shape[0]):
    for j in range(x1.shape[1]):
        for k in range(x1.shape[2]):
            xg = (x1[i,j,k],x2[i,j,k],x3[i,j,k])
            # Create new thread
            threads.append(ThreadClass(id, xg))
            # Increment ID
            id += 1
# Run applications simultaneously as multiple threads
# Max number of threads to run at once
max_threads = 8
for t in threads:
    while (threading.active_count()>max_threads):
        # check for available threads every 0.01 sec
        time.sleep(0.01)
    # start the thread
    t.start()
# Check for completion
mt = 10.0 # max time (sec)
it = 0.0 # time counter
st = 1.0 # sleep time (sec)
while (threading.active_count()>=3):
    time.sleep(st)
    it = it + st
    print('Active Threads: ' + str(threading.active_count()))
    # Terminate after max time
    if (it>=mt):
        break
# Initialize array for objective
obj = np.empty_like(x1)
# Retrieve objective results
id = 0
id_best = 0; obj_best = 1e10
for i in range(x1.shape[0]):
    for j in range(x1.shape[1]):
        for k in range(x1.shape[2]):
            obj[i,j,k] = threads[id].objective
            if obj[i,j,k]<obj_best:
                id_best = id
                obj_best = obj[i,j,k]
            id += 1
print(obj)
print(f'Best objective {obj_best}')
print(f'Solution {threads[id_best].m.x}')
It produces the global solution:
Best objective 936.0
Solution [[0.0] [0.0] [8.0]]
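The hand-rolled thread throttling and bookkeeping above can also be condensed with `concurrent.futures.ThreadPoolExecutor`. In this sketch, `solve_from` is a placeholder for a Gekko solve from one starting point; it only scores the guess with the objective so the example runs without a solver installed:

```python
from concurrent.futures import ThreadPoolExecutor
import itertools

def solve_from(xg):
    # placeholder for a Gekko solve started from guess xg;
    # here it just evaluates the objective at the guess itself
    x1, x2, x3 = xg
    return 1000 - x1**2 - 2*x2**2 - x3**2 - x1*x2 - x1*x3, xg

# same 3-D mesh of starting points as the threaded version above
guesses = list(itertools.product([0.0, 3.0, 6.0, 9.0], repeat=3))

# the pool caps concurrency, replacing the manual active_count() loop
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(solve_from, guesses))

obj_best, xg_best = min(results, key=lambda r: r[0])
print(f'Best objective {obj_best}')
print(f'Best start {xg_best}')
```

With a real Gekko solve inside `solve_from`, the pool gives the same parallel multi-start search without manual `sleep`/`active_count` polling.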
Upvotes: 2
Reputation: 14346
In addition to multi-start random searches there are Bayesian, Genetic Algorithm, and other methods in hyperparameter optimization packages such as hyperopt. This Medium article has a nice overview of hyperparameter search methods. Below is code to find the global optimum in Python with Gekko. There are suggestions to parallelize (e.g. "Python and HyperOpt: How to make multi-process grid searching?"), although I haven't tried them. Here is the specific example with a Tree-structured Parzen Estimator.
from gekko import GEKKO
from hyperopt import fmin, tpe, hp
from hyperopt import STATUS_OK, STATUS_FAIL
# Define the search space for the hyperparameters
space = {'x1': hp.quniform('x1', 0, 10, 3),
         'x2': hp.quniform('x2', 0, 10, 3),
         'x3': hp.quniform('x3', 0, 10, 3)}
def objective(params):
    m = GEKKO(remote=False)
    x = m.Array(m.Var,3,lb=0)
    x1,x2,x3 = x
    x1.value = params['x1']
    x2.value = params['x2']
    x3.value = params['x3']
    m.Minimize(1000-x1**2-2*x2**2-x3**2-x1*x2-x1*x3)
    m.Equations([8*x1+14*x2+7*x3==56,
                 x1**2+x2**2+x3**2>=25])
    m.options.SOLVER = 1
    m.solve(disp=False,debug=False)
    obj = m.options.objfcnval
    if m.options.APPSTATUS==1:
        s=STATUS_OK
    else:
        s=STATUS_FAIL
    m.cleanup()
    return {'loss':obj, 'status': s, 'x':x}
best = fmin(objective, space, algo=tpe.suggest, max_evals=50)
sol = objective(best)
print(f"Solution Status: {sol['status']}")
print(f"Objective: {sol['loss']:.2f}")
print(f"Solution: {sol['x']}")
Hyperopt finds the global solution with gekko as the local minimizer:
Solution Status: ok
Objective: 936.00
Solution: [[0.0] [0.0] [8.0]]
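For intuition on the search space: `hp.quniform(label, 0, 10, 3)` draws uniformly on [0, 10] and rounds to the nearest multiple of 3, so each variable's guess lands on {0, 3, 6, 9}, the same grid as the mesh in the multi-start answer. A quick stand-alone mimic (not hyperopt's actual implementation, just its documented formula `round(uniform(low, high) / q) * q`):

```python
import random

def quniform(low, high, q, rng):
    # mimic hyperopt's hp.quniform: round(uniform(low, high) / q) * q
    return round(rng.uniform(low, high) / q) * q

rng = random.Random(1)
samples = {quniform(0, 10, 3, rng) for _ in range(200)}
print(sorted(samples))
```

So the TPE search proposes quantized starting points, and Gekko's local solver then refines each one to a nearby local optimum.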
Upvotes: 1