I have a function in Python which looks like this:
import numpy as np

def fun(Gp, Ra, Mr, Pot, Sp, Mc, Keep):
    if Keep == True:
        return Pot * np.tanh((Gp + Ra + Mr + Mc) * Sp)
Assuming the following data:
import pandas as pd

dt_org = pd.DataFrame({"RA": [0.5, 0.8, 0.9],
                       "MR": [0.97, 0.95, 0.99],
                       "POT": [0.25, 0.12, 0.05],
                       "SP": [0.25, 0.12, 0.15],
                       "MC": [50, 75, 100],
                       "COUNTRY": ["GB", "IR", "GR"]
                       })
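For instance, each row of dt_org supplies the Ra, Mr, Pot, Sp and Mc arguments of fun; evaluating it on the first row with a trial value Gp = 10 (my own quick illustration):

fun(10, Ra=0.5, Mr=0.97, Pot=0.25, Sp=0.25, Mc=50, Keep=True)  # ~0.25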
I have 100 GP in total, and I want to allocate all of them across the three rows in order to maximize the objective function, under the restriction that all 3 allocations are positive.
According to this post, scipy.optimize would be the way to go, but I am confused about how to write the problem down.
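Written out (my reading of fun applied row-wise to the data), the problem is: choose GP_1, GP_2, GP_3 so as to

maximize    POT_i * tanh((GP_i + RA_i + MR_i + MC_i) * SP_i), summed over the three rows i,
subject to  GP_1 + GP_2 + GP_3 = 100  (all the GP are allocated)  and  GP_i >= 0.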
Update: my try
from scipy.optimize import minimize

y = {'A': {'RA': 0.5, 'MR': 0.97, 'POT': 0.25, 'SP': 0.25, 'MC': 50, 'keep': True},
     'B': {'RA': 0.8, 'MR': 0.95, 'POT': 0.12, 'SP': 0.12, 'MC': 75, 'keep': True},
     'C': {'RA': 0.9, 'MR': 0.99, 'POT': 0.05, 'SP': 0.15, 'MC': 100, 'keep': True}}
def objective_function(x):
    return -(fun(x[0], Ra=y['A']['RA'], Mr=y['A']['MR'],
                 Pot=y['A']['POT'], Sp=y['A']['SP'],
                 Mc=y['A']['MC'], Keep=y['A']['keep']) +
             fun(x[1], Ra=y['B']['RA'], Mr=y['B']['MR'],
                 Pot=y['B']['POT'], Sp=y['B']['SP'],
                 Mc=y['B']['MC'], Keep=y['B']['keep']) +
             fun(x[2], Ra=y['C']['RA'], Mr=y['C']['MR'],
                 Pot=y['C']['POT'], Sp=y['C']['SP'],
                 Mc=y['C']['MC'], Keep=y['C']['keep']))
cons = ({'type': 'ineq', 'fun': lambda x: x[0] + x[1] + x[2] - 100})
bnds = ((0, None), (0, None), (0, None))

minimize(objective_function, x0=[1, 1, 1], args=y, method='SLSQP', bounds=bnds,
         constraints=cons)
The problem now is that I get the error ValueError: Objective function must return a scalar, even though the output of the fun function is a scalar.
Update 2 (after @Cleb's comment): so now I changed the function to:
def objective_function(x, y):
    temp = -(fun(x[0], Ra=y['A']['RA'], Mr=y['A']['MR'],
                 Pot=y['A']['POT'], Sp=y['A']['SP'],
                 Mc=y['A']['MC'], Keep=y['A']['keep']) +
             fun(x[1], Ra=y['B']['RA'], Mr=y['B']['MR'],
                 Pot=y['B']['POT'], Sp=y['B']['SP'],
                 Mc=y['B']['MC'], Keep=y['B']['keep']) +
             fun(x[2], Ra=y['C']['RA'], Mr=y['C']['MR'],
                 Pot=y['C']['POT'], Sp=y['C']['SP'],
                 Mc=y['C']['MC'], Keep=y['C']['keep']))
    print("GP for the 1st: " + str(x[0]))
    print("GP for the 2nd: " + str(x[1]))
    print("GP for the 3rd: " + str(x[2]))
    return temp
cons = ({'type': 'ineq', 'fun': lambda x: x[0] + x[1] + x[2] - 100})
bnds = ((0, None), (0, None), (0, None))
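The call stays the same as in my first try; minimize forwards args to the objective as extra positional arguments (a non-tuple args is wrapped into a one-element tuple), so objective_function now receives y as its second argument:

minimize(objective_function, x0=[1, 1, 1], args=y, method='SLSQP', bounds=bnds,
         constraints=cons)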
Now there are 2 problems:
1. the values of x[0], x[1], x[2] are really close to each other;
2. the sum x[0] + x[1] + x[2] is over 100.
There is a general issue regarding your objective function that explains why the values you obtain are very close to each other; it is discussed below.
If we first look at the technical aspect, the following works fine for me:
import numpy as np
from scipy.optimize import minimize

def func(Gp, Ra, Mr, Pot, Sp, Mc, Keep):
    if Keep:
        return Pot * np.tanh((Gp + Ra + Mr + Mc) * Sp)

def objective_function(x, y):
    temp = -(func(x[0], Ra=y['A']['RA'], Mr=y['A']['MR'], Pot=y['A']['POT'], Sp=y['A']['SP'], Mc=y['A']['MC'], Keep=y['A']['keep']) +
             func(x[1], Ra=y['B']['RA'], Mr=y['B']['MR'], Pot=y['B']['POT'], Sp=y['B']['SP'], Mc=y['B']['MC'], Keep=y['B']['keep']) +
             func(x[2], Ra=y['C']['RA'], Mr=y['C']['MR'], Pot=y['C']['POT'], Sp=y['C']['SP'], Mc=y['C']['MC'], Keep=y['C']['keep']))
    return temp

y = {'A': {'RA': 0.5, 'MR': 0.97, 'POT': 0.25, 'SP': 0.25, 'MC': 50., 'keep': True},
     'B': {'RA': 0.8, 'MR': 0.95, 'POT': 0.12, 'SP': 0.12, 'MC': 75., 'keep': True},
     'C': {'RA': 0.9, 'MR': 0.99, 'POT': 0.05, 'SP': 0.15, 'MC': 100., 'keep': True}}

cons = ({'type': 'ineq', 'fun': lambda x: x[0] + x[1] + x[2] - 100.})
bnds = ((0., None), (0., None), (0., None))

print(minimize(objective_function, x0=np.array([1., 1., 1.]), args=y, method='SLSQP', bounds=bnds, constraints=cons))
This will print
    fun: -0.4199999999991943
    jac: array([ 0.,  0.,  0.])
message: 'Optimization terminated successfully.'
   nfev: 6
    nit: 1
   njev: 1
 status: 0
success: True
      x: array([ 33.33333333,  33.33333333,  33.33333333])
As you can see, x nicely sums up to 100.
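(Side note: 'ineq' in SLSQP only enforces x[0] + x[1] + x[2] - 100 >= 0, so the sum is merely bounded from below. If the three values should add up to exactly 100, you could use an equality constraint instead:

cons = ({'type': 'eq', 'fun': lambda x: x[0] + x[1] + x[2] - 100.})

which forces the sum to be exactly 100 at the solution.)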
If you now change bnds to e.g.

bnds = ((40., 50), (0., None), (0., None))
then the result will be
    fun: -0.419999999998207
    jac: array([ 0.,  0.,  0.])
message: 'Optimization terminated successfully.'
   nfev: 6
    nit: 1
   njev: 1
 status: 0
success: True
      x: array([ 40.,  30.,  30.])
Again, the constraint is met.
One can also see that the objective value is the same. That is because Mc (and the allocated Gp) are large, so the argument of np.tanh is deep in its saturated region and np.tanh just returns 1.0. That implies that func always returns just the value Pot for each of your three dictionaries in y. If you sum up the three corresponding values, 0.25 + 0.12 + 0.05, you indeed get the 0.42 that is found by the optimization.
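As a quick check of the saturation claim, plugging the row-A numbers from the first result back in:

import numpy as np

# row A at the optimum: Gp ~ 33.33, Ra = 0.5, Mr = 0.97, Mc = 50., Sp = 0.25
z = (33.33 + 0.5 + 0.97 + 50.) * 0.25  # ~21.2, deep in tanh's flat region
print(np.tanh(z))            # 1.0 to double precision
print(0.25 + 0.12 + 0.05)    # ~0.42, the optimal objective value found above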