Eigenvalue

Reputation: 1133

scipy optimization minimize function not enforcing constraints

It appears that my constraint function is not being enforced correctly.

import numpy as np 
import scipy.integrate as integrate
import scipy.interpolate as interpolate
import pylab as plt
import scipy.optimize as op
import math

def make_cons(parameter_guess):
    cons=()
    for i in range(0,len(parameter_guess)):
        constraint = {'type': 'ineq', 'fun': lambda parameter_guess:  -math.fabs(parameter_guess[i]) + 1 }
        cons +=(constraint,)
    # print(cons)
    #cons=({'type': 'ineq', 'fun': lambda parameter_guess:  -parameter_guess+ 1 })
    return cons


def problem(N,IC):
    t=np.linspace(0,5,1000)
    tt=np.linspace(0,5+.5,N+1)
    parameter_guess = .5*np.ones(len(tt))
    res=op.minimize(cost_function, parameter_guess, args=(t,tt,IC), method='SLSQP',constraints=make_cons(parameter_guess))
    true_param= res.x
    print(res.message)
    print(true_param)
    generate_state_and_control(true_param,t,tt,IC)


def cost_function(parameter_guess,t,tt,IC):
    #print parameter_guess
    f_p = interpolate.interp1d(tt, parameter_guess)
    sol = integrate.odeint(f, [IC[0],IC[1],0], t, args=(f_p,))
    cost_sol = sol[:,2]
    cost=cost_sol[-1]
    print('cost ' + str(cost))
    return cost


def f(y,t,f_p):
    dydt=[-y[0] +2*y[1] , y[0] -.2*y[1] + f_p(t), .5*(y[0]**2 + 2*y[1]**2 + 3*f_p(t)**2)]
    return dydt


def generate_state_and_control(parameters,t,tt,IC):
    f_p = interpolate.interp1d(tt, parameters)
    sol = integrate.odeint(f, [IC[0],IC[1],0], t, args=(f_p,))
    control=f_p(t)
    position=sol[:,0]
    velocity=sol[:,1]
    cost_sol = sol[:,2]
    cost=cost_sol[-1]
    print('cost ' + str(cost))
    print(parameters)
    plt.plot(tt,parameters,label='Control')
    plt.xlabel('time')
    plt.ylabel('u')
    plt.title('Control')
    plt.show()
    plt.clf()
    plt.plot(position,velocity,label='Velocity vs Position')
    plt.xlabel('Position')
    plt.ylabel('Velocity')
    plt.title('Velocity vs Position')
    plt.show()



problem(15,[3,6])

I make my constraints in the make_cons function. I simply say that each of the variables has to be at most 1 in absolute value (i.e. |p_i| <= 1; minimize expects inequality constraints of the form g(p_i) >= 0).

However, if I run

problem(15,[3,6])

the parameters printed at the end are:
[ -6.91310983 -11.84886554  -8.39257891  -5.89026938  -3.94611243
  -2.83438566  -1.84550722  -1.18591646  -0.72311117  -0.5668469
   0.10564927  -0.02283327  -0.0312163   -0.08288569   0.34830762   0.5       ]

Clearly, not all of these values are between -1 and 1.

Does anyone see the trivial bug I've made here?

Upvotes: 4

Views: 2294

Answers (1)

Warren Weckesser

Reputation: 114791

Don't use the absolute value function in the constraint functions. The SLSQP algorithm assumes that the constraint functions are continuously differentiable, and the absolute value is not differentiable at 0.

To get the same effect as one of your current constraints, you can create two constraint functions, one to ensure x >= -1 and another to ensure x <= 1.

For example:

def make_cons(parameter_guess):
    cons = ()
    for i in range(len(parameter_guess)):
        # Each lambda receives the full parameter vector; the expression is
        # evaluated element-wise, so each dict yields one inequality per entry.
        constraint = {'type': 'ineq', 'fun': lambda x: 1 - x}   # x <= 1
        cons += (constraint,)
        constraint = {'type': 'ineq', 'fun': lambda x: 1 + x}   # x >= -1
        cons += (constraint,)
    return cons
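
Since each lambda above receives the entire parameter vector and returns an array (SLSQP treats each element of the returned array as a separate inequality constraint), the loop only duplicates the same constraints. A single pair of dictionaries should behave equivalently; a minimal sketch:

    def make_cons(parameter_guess):
        # one vector-valued constraint per side of the bound
        return ({'type': 'ineq', 'fun': lambda x: 1 - x},   # x <= 1 element-wise
                {'type': 'ineq', 'fun': lambda x: 1 + x})   # x >= -1 element-wise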

Or you could express the constraint as 1 - x**2 >= 0, which is smooth everywhere:

def make_cons(parameter_guess):
    cons = ()
    for i in range(len(parameter_guess)):
        # |x| <= 1 expressed as a single differentiable inequality
        constraint = {'type': 'ineq', 'fun': lambda x: 1 - x**2}
        cons += (constraint,)
    return cons

That will still not give a good result. It will work much better if you scale your cost function to return values that are relatively small. Simply putting cost /= 100000 before returning from cost_function() makes a big difference.
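
For concreteness, here's a sketch of cost_function with that scaling applied (100000 is just the constant suggested above; the placement of the print is incidental):

    def cost_function(parameter_guess, t, tt, IC):
        f_p = interpolate.interp1d(tt, parameter_guess)
        sol = integrate.odeint(f, [IC[0], IC[1], 0], t, args=(f_p,))
        cost = sol[:, 2][-1]     # accumulated running cost is the third state
        print('cost ' + str(cost))
        cost /= 100000           # scale the objective down for SLSQP
        return cost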

With those two changes, I get the following at the end of the run:

Optimization terminated successfully.
[ 0.31417892  0.21871057  0.28818131  0.40615797  0.26569214  0.74145029
 -1.00000002 -0.9983564  -0.77176625 -0.10348714 -0.14786611  0.04025887
  0.24103308  0.39788151  0.49343655  0.5       ]
cost 132978.180126
[ 0.31417892  0.21871057  0.28818131  0.40615797  0.26569214  0.74145029
 -1.00000002 -0.9983564  -0.77176625 -0.10348714 -0.14786611  0.04025887
  0.24103308  0.39788151  0.49343655  0.5       ]

Since your constraints are simple constant bounds on the variables, you can use the bounds argument instead of the constraint functions. For example, with the following

res=op.minimize(cost_function, parameter_guess, args=(t,tt,IC),
                method='SLSQP',
                #constraints=make_cons(parameter_guess),
                bounds=[(-1, 1)]*(N+1),
                options={'ftol': 1e-10})

I get an even better result:

Optimization terminated successfully.
[ 0.34405263  0.20874193  0.01236256 -0.88512666  0.90658335  0.65950279
 -0.9576039  -0.99462141 -0.97049943 -0.99994613 -0.99998563 -0.99999957
 -1.          0.09842358  0.47056459  0.5       ]
cost 125446.690335
[ 0.34405263  0.20874193  0.01236256 -0.88512666  0.90658335  0.65950279
 -0.9576039  -0.99462141 -0.97049943 -0.99994613 -0.99998563 -0.99999957
 -1.          0.09842358  0.47056459  0.5       ]
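
Incidentally, in more recent SciPy versions the same constant bounds can be written with the scipy.optimize.Bounds class; a sketch, assuming SciPy >= 1.1:

    from scipy.optimize import Bounds

    # element-wise lower and upper bounds, equivalent to [(-1, 1)]*(N+1)
    bounds = Bounds(-np.ones(N + 1), np.ones(N + 1))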

On the other hand, with

res=op.minimize(cost_function, parameter_guess, args=(t,tt,IC),
                method='SLSQP',
                constraints=make_cons(parameter_guess),
                #bounds=[(-1, 1)]*(N+1),
                options={'ftol': 1e-10, 'maxiter': 1000})

the result is even better:

Optimization terminated successfully.
[ 0.26307724  0.05991932  0.24965239 -0.99940001 -0.22487722 -0.99946811
 -0.9997716  -0.99045829 -0.98981196 -0.9946721  -1.         -0.99999917
 -1.         -0.99975764  0.27805723  0.5       ]
cost 116703.409203
[ 0.26307724  0.05991932  0.24965239 -0.99940001 -0.22487722 -0.99946811
 -0.9997716  -0.99045829 -0.98981196 -0.9946721  -1.         -0.99999917
 -1.         -0.99975764  0.27805723  0.5       ]

Go figure.

I recommend experimenting a bit more with the options.
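
For instance, the full set of options that SLSQP accepts can be listed with show_options:

    from scipy.optimize import show_options
    show_options(solver='minimize', method='SLSQP')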

Upvotes: 6
