Thomas LESIEUR

Reputation: 427

Scipy minimize: how to get an integer-valued array as output

I have this kind of data:

import random

data = random.sample(range(1, 100), 5)
x = [-1, 1, 1, -1, 1]

def f(x, data):
    # multiply each data value by its sign and return the absolute value of the sum
    prod = [a * b for a, b in zip(data, x)]
    result = abs(sum(prod))
    return result

I would like to find the best x, composed of -1s and 1s, that minimizes the value of f(x).

Maybe we can use scipy.optimize.minimize(), but how can we add the -1 or 1 as a constraint on the values inside of x? Does somebody have an idea?

Upvotes: 0

Views: 190

Answers (2)

joni

Reputation: 7157

You want to solve a mixed-integer linear programming problem (MILP), which isn't supported yet by scipy.optimize.

However, you can use a modelling package like PuLP to formulate your MILP and pass it to a MILP solver. Note that your MILP can be formulated as

(P)

min   |f(x)| = |d_0 * x_0 + ... + d_n * x_n|
s.t. x_i ∈ {-1, 1}  ∀ i = 0,...,n

which is the same as

(P')

min   |f(x)| = |d_0 * (2*x_0 - 1) + ... + d_n * (2*x_n - 1)|
s.t. x_i ∈ {0, 1}  ∀ i = 0,...,n

and can be implemented like this

min abs_obj
s.t. f(x) <= abs_obj
     f(x) >= -1.0*abs_obj
     x_i   ∈ {0, 1}  ∀ i = 0,...,n

In code:

import pulp
import random

data = random.sample(range(1, 100), 5)

# pulp model
mdl = pulp.LpProblem("our_model", sense=pulp.LpMinimize)

# the binary variables x
x = pulp.LpVariable.dicts("x", range(5), cat="Binary")
# the variable that stores the absolute value of the objective
abs_obj = pulp.LpVariable("abs_obj")

# set the MIP objective
mdl += abs_obj

# Define the objective: |f(x)| = abs_obj
mdl += pulp.lpSum((2 * x[i] - 1) * data[i] for i in range(5)) <= abs_obj
mdl += pulp.lpSum((2 * x[i] - 1) * data[i] for i in range(5)) >= -1.0*abs_obj

# solve the problem
mdl.solve()

# your solution
signs = [1 if var.varValue > 0 else -1 for var in x.values()]
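
As a quick sanity check, you can append something like the following to the snippet above; pulp.LpStatus, mdl.status and pulp.value are standard PuLP helpers for inspecting the result:

# inspect the solver status, the minimal |f(x)| and the chosen signs
print(pulp.LpStatus[mdl.status])   # should report "Optimal"
print(pulp.value(abs_obj))         # the minimal value of |f(x)|
print(signs)                       # the corresponding signs in {-1, 1}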

Alternatively, if you don't want to use another package, you can use scipy.optimize.minimize and implement a simple penalty method. Thereby you solve the problem (P') by solving the penalty problem

min |f(x)| + Ɛ * (x_0 * (1 - x_0) + ... + x_n * (1 - x_n))
with 0 <= x_i <= 1

where Ɛ is a given penalty parameter. Here, the idea is that the penalty term on the right equals zero exactly when every x_i is 0 or 1, i.e. for an integer solution.
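
A rough sketch of this penalty approach could look like the following, assuming a fixed penalty parameter eps and a random starting point (both are illustrative choices, and a single solve with a fixed eps is not guaranteed to land on an integer solution):

# rough penalty-method sketch (not a substitute for a MILP solver)
import random
import numpy as np
from scipy.optimize import minimize

data = np.array(random.sample(range(1, 100), 5))
eps = 1000.0  # assumed penalty parameter; may have to be increased

def penalized(x):
    fx = np.dot(2 * x - 1, data)                  # f(x) in the (P') parameterization
    return abs(fx) + eps * np.sum(x * (1 - x))    # |f(x)| plus the integrality penalty

res = minimize(penalized, x0=np.random.rand(len(data)), bounds=[(0, 1)] * len(data))
signs = [1 if xi > 0.5 else -1 for xi in res.x]   # round and map {0, 1} -> {-1, 1}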

Note that you may need to solve a sequence of penalty problems (with increasing Ɛ) to achieve convergence to an integer solution. Thus, I'd highly recommend sticking with a MILP solver instead of implementing a penalty method on your own.

Upvotes: 2

Roman Tz

Reputation: 99

Yes, you can do it using scipy.optimize.minimize:

from scipy.optimize import minimize
minimize(f, [0] * len(data), args=data, bounds=[(-1, 1)] * len(data))

This call minimizes the f you defined in the original post. It passes a zero array as the initial guess for the minimization problem. The extra argument f requires is 'data', which is supplied via the 'args' parameter. The constraints you want are specified by the 'bounds' argument as a list of (min, max) tuples with the same length as the input data.
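
For completeness, a self-contained version of that call might look like the sketch below (here args is passed as a one-element tuple; note that these bounds keep each x_i inside the interval [-1, 1], they do not restrict it to the two values -1 and 1):

import random
from scipy.optimize import minimize

data = random.sample(range(1, 100), 5)

def f(x, data):
    # |d_0*x_0 + ... + d_n*x_n|, as defined in the question
    return abs(sum(a * b for a, b in zip(data, x)))

res = minimize(f, [0] * len(data), args=(data,), bounds=[(-1, 1)] * len(data))
print(res.x, res.fun)  # continuous minimizer within the bounds and its objective value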

Upvotes: 0
