Reputation: 1
The main question here is how black-box functions are used in an objective function definition, and how optimization algorithms call those functions.
Let's assume we have a function defined as follows:
f is an objective function to be minimized for a given problem. Let's say:
f(Xi, Yi) = (Ai * Xi) + (Bi * Xi * Yi) for i = 1, 2, ..., n
where Yi = N(X1, X2, ..., Xn) is a black-box function (a simulation) whose analytical form is unknown and which takes all Xi's as inputs.
N refers to a network being simulated.
Ai and Bi are constants.
The problem has the following constraints:
X1+X2+...+Xn = C
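For concreteness, the structure of the problem can be sketched in plain Python. Here `blackbox_N` is a stand-in for the real simulation N (whose analytical form is unknown), and the values of `a`, `b`, and `C` are purely illustrative:

```python
# Sketch of the problem structure. `blackbox_N` stands in for the real
# simulation N(X1, ..., Xn); the real one is a black box.
a = [1.0, 2.0, 3.0]          # constants A_i (illustrative values)
b = [0.5, 0.5, 0.5]          # constants B_i (illustrative values)
C = 6.0                      # right-hand side of the equality constraint

def blackbox_N(X):
    # Placeholder for the simulation: takes all X_i, returns all Y_i.
    return [x ** 2 for x in X]

def f(X):
    Y = blackbox_N(X)        # one black-box evaluation per objective call
    return sum(a[i] * X[i] + b[i] * X[i] * Y[i] for i in range(len(X)))

print(f([1.0, 2.0, 3.0]))    # evaluate at a feasible point (sums to C)
```

The key point this illustrates: every evaluation of f requires one run of the black box.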
The function definition below just shows how I've called my simulation results and used them in my optimization objective. I'm also open to suggestions if it could be made better. (But my main question follows the function definition.)
def fun(X):
    sim_inpfile(X)                       # previously defined function that writes X to the simulation's input file
    p = Popen(['python', 'Test02.py'])   # run the simulation in a subprocess
    p.wait()                             # wait for the simulation to finish before reading its results
    importlib.reload(jn.output.Results)  # jn is a custom module exposing the simulation results; reload it so it reflects the run just executed
    Y = []                               # simulation results
    for r in jn.output.Results.res:
        if r.name in nodes:
            Y.append(r.pr)               # extract Y as a list from the simulation results
    total = 0                            # optimization objective (avoid shadowing the built-in sum)
    for i in range(len(a)):              # len(a) == len(b) == len(X) == len(Y)
        total += a[i] * X[i] + b[i] * X[i] * Y[i]   # a and b are constant lists with the same size as X and Y
    return total                         # the objective value, which depends on Y (the simulation results)
Now I call a Python optimization solver:
res = scipy.optimize.minimize(fun, x0, method='SLSQP', bounds=bnds, constraints=cons)  # x0 = initial X values, bnds = variable bounds on X, cons = the equality constraint
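For reference, `bnds` and `cons` for the equality constraint X1 + ... + Xn = C can be set up as below. The objective here is a toy stand-in for the simulation-based `fun`, and `C`, `n`, and the box bounds are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

C = 6.0                                   # illustrative constraint value
n = 3
x0 = np.full(n, C / n)                    # a feasible starting point
bnds = [(0.0, C)] * n                     # box bounds on each X_i
cons = ({'type': 'eq', 'fun': lambda x: np.sum(x) - C},)  # X1+...+Xn = C

def toy_objective(x):                     # stand-in for the simulation-based fun(X)
    return float(np.sum(x ** 2))

res = minimize(toy_objective, x0, method='SLSQP', bounds=bnds, constraints=cons)
print(res.x, np.sum(res.x))               # the solution respects the equality constraint
```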
Questions:
For each iteration of the algorithm, will the objective function be called for evaluation?
If yes, is the way I've coded it an appropriate way to return my objective function, with the black-box function evaluated at each iteration of the solver?
Also, if there are similar problems in any of the Python optimization packages, or similar questions and examples, please share their links or threads.
I appreciate your time, and your experience may help me solve this problem.
Upvotes: 0
Views: 1782
Reputation: 33532
This is a very broad question and optimization is a very complex topic! So just a few remarks:
For each iteration of the algorithm, will the objective function be called for evaluation?
As you call it, the function will be called multiple times per iteration, since you did not provide a Jacobian. SLSQP will use numerical differentiation to reason about the step direction it will take (and its length)!
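You can see this directly by wrapping the objective in a counter (toy objective; the point is only the call count vs. iteration count):

```python
import numpy as np
from scipy.optimize import minimize

calls = {'n': 0}

def counted(x):
    calls['n'] += 1           # count every objective evaluation
    return float(np.sum(x ** 2))

res = minimize(counted, np.array([3.0, 4.0]), method='SLSQP')
# Without a jacobian, SLSQP approximates the gradient by finite
# differences, so each iteration costs several objective evaluations.
print(res.nit, calls['n'])    # iterations vs. objective evaluations
```

With an expensive simulation behind the objective, every one of those extra evaluations is a full simulation run.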
If yes, is the way I've coded, an appropriate way to return my objective function with the black-box function evaluation at each iteration of the solver?
No, probably not!
The first obvious issue is the process-based call of your simulation, which induces overhead. If the simulation itself dominates the runtime this matters less, but in general it's the wrong approach.
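If the simulation is Python code anyway, one way to avoid the per-evaluation subprocess overhead is to expose it as an importable function and call it in-process. This is a hypothetical refactor: `run_simulation` stands in for whatever Test02.py currently does:

```python
# Hypothetical refactor: instead of Popen('python Test02.py') plus an
# input-file round trip, move the body of Test02.py into a function that
# is imported once and called directly with X.
def run_simulation(X):
    # ... the work Test02.py currently does, parameterized by X ...
    return [x * 0.1 for x in X]   # stand-in result

def fun(X):
    Y = run_simulation(X)         # in-process call: no interpreter start-up,
                                  # no input file written per evaluation
    return sum(xi * yi for xi, yi in zip(X, Y))

print(fun([1.0, 2.0]))
```

This saves one interpreter start-up and one file round trip per objective evaluation, which adds up quickly given how often SLSQP evaluates the objective.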
So although there are not many specifics in your question (details / assumptions matter!), scipy is not the most useful library to achieve what you want! There are survey papers (example) about global / derivative-free optimization that outline alternatives. One example (arbitrarily chosen; open-source) is CoinOR's RBFOpt, whose page summarizes the issues above:
It does not assume that f(x) is known in analytical form: f(x) is simply a black-box that, given input values, produces output values. The bounds on the variables x_L, x_U are assumed to be finite. RBFOpt is especially targeted at problems for which each evaluation of the objective function f(x) is expensive (in terms of computing time, or cost, or some other measure) and we want to find a global minimum of the function with as few function evaluations as possible. Since this is a very difficult class of problems (we do not assume availability of first order derivatives), RBFOpt works best on problems that are relatively small dimensional (up to 20 variables, ideally less than 10) and for which the bounding box is not too large.
Upvotes: 1