sapo_cosmico

Reputation: 6532

How can I minimize a function in Python, without using gradients, and using constraints and ranges?

EDIT: looks like this was already answered before here

It didn't show up in my searches because I didn't know the right nomenclature. I'll leave the question here for now in case someone arrives here because of the constraints.


I'm trying to optimize a function which is flat at almost all points (a "step function", but in a higher dimension).

The objective is to optimize a set of weights that must sum to one and that are the parameters of a function I need to minimize.

The problem is that, since the function is flat at most points, gradient techniques fail: they immediately "converge" on the starting guess.

My hypothesis is that this could be solved with (a) simulated annealing or (b) genetic algorithms. SciPy points me to basinhopping. However, I cannot find any way to use the constraint (the weights must sum to 1) or the ranges (each weight must be between 0 and 1) with scipy.

Actual question: How can I solve a minimization problem without gradients, and also use constraints and ranges for the input variables?


The following is a toy example (evidently this one could be solved using the gradient):

# import minimize 
from scipy.optimize import minimize

# define a toy function to minimize 
def my_small_func(g): 
    x = g[0]
    y = g[1]
    return x**2 - 2*y + 1

# define the starting guess 
start_guess = [.5,.5]

# define the acceptable ranges (for g[0] and g[1] respectively)
my_ranges = ((0,1),(0,1))

# define the constraint (they must always sum to 1)
def constraint(g): 
    return g[0] + g[1] - 1 
cons = {'type':'eq', 'fun': constraint}

# minimize 
minimize(my_small_func, x0=start_guess, method='SLSQP',
         bounds=my_ranges, constraints=cons)
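
For the record (in the spirit of the EDIT above): basinhopping forwards a minimizer_kwargs dict to its local minimizer, so bounds and constraints can be supplied that way. A minimal sketch reusing the definitions above (niter is an arbitrary choice here):

from scipy.optimize import basinhopping

# forward bounds and constraints to the local SLSQP step;
# basinhopping itself only does the random global jumps
minimizer_kwargs = {'method': 'SLSQP',
                    'bounds': my_ranges,
                    'constraints': cons}

result = basinhopping(my_small_func, x0=start_guess,
                      minimizer_kwargs=minimizer_kwargs,
                      niter=100)
print(result.x)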

Upvotes: 1

Views: 2997

Answers (1)

Vandenman

Reputation: 3176

I usually use R, so maybe this is a bad answer, but anyway, here goes.

You can solve optimization problems like this using a global optimizer. An example of this is Differential Evolution. The linked method does not use gradients. As for constraints, I usually build them manually. That looks something like this:

    # some dummy function to minimize
    def objective_function(params):
        a, b = params
        # if the constraint is not met, return a very high value,
        # indicating a very bad fit (a float comparison needs a
        # tolerance rather than an exact != check)
        if abs(a + b - 1) > 1e-6:
            return 1e90
        # otherwise do the actual computation of interest
        return a**2 - 2*b + 1  # placeholder for the real fit value

Then you simply feed this function to the differential evolution package and that should do the trick (see the sketch below). Methods like differential evolution are designed in particular to solve very high-dimensional problems. However, the constraint you mentioned can be a problem, as it will likely result in very many invalid parameter configurations. This is not necessarily a problem for the algorithm, but it simply means you need to do a lot of tweaking and should expect a lot of waiting time. Depending on your problem, you could try optimizing the weights/parameters in blocks: optimize the parameters given a set of weights, then optimize the weights given the previous set of parameters, and repeat that many times.
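
A minimal sketch of the feeding step, using SciPy's differential_evolution (the seed, maxiter, and the penalty tolerance above are arbitrary choices, not requirements):

    from scipy.optimize import differential_evolution

    # bounds for each parameter, as in the question
    bounds = [(0, 1), (0, 1)]

    # the global optimizer only needs function values, no gradients;
    # infeasible candidates are rejected via the 1e90 penalty
    result = differential_evolution(objective_function, bounds,
                                    seed=42, maxiter=1000)
    print(result.x, result.fun)

In practice, a soft penalty proportional to abs(a + b - 1) tends to guide the search better than the hard cutoff, since almost every random candidate violates an exact sum-to-one constraint.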

Hope this helps :)

Upvotes: 1
