nrobins1

Reputation: 41

Quadratic Programming in python using data from octave

I'm in the midst of converting parts of a MATLAB program into Python and Octave.

I am using Octave to generate two matrices, then importing those matrices into Python using oct2py. The root of my problem is these lines in MATLAB (using H_combined and f_combined below):

handles.options =optimset('algorithm','interior-point-convex','Display','off','TolFun',1e-15,'TolX',1e-10,'MaxFunEvals', 1E5);

handles.x_ridge_combined = quadprog(H_combined, f_combined, [], [], [], [], handles.lb_re, handles.ub_re, handles.x_re_0, handles.options);

So far I have been looking for a solution in either Python or Octave that would produce similar output, to no avail.

I have attempted to use quadprog from Octave's optim package, but I get an output of 120, 1, 1, 1, ..., 1 for x_ridge_combined, rather than the assortment of float values I would expect. I have verified that H_combined and f_combined are exactly the same as when the program is run in MATLAB, so I suppose quadprog in Octave does not behave the same way.

After trying the Octave approach, I import the values into Python to try the quadprog package.

Trying quadprog,

print(quadprog.solve_qp(H,f))

yields the error

ValueError: Buffer has wrong number of dimensions (expected 1, got 2)

The types and shapes of H and f are as follows:

<class 'numpy.ndarray'> #H
(123, 123)
<class 'numpy.ndarray'> #f
(1, 123)

Does anybody know why I may be getting these errors? Or does anyone have any other suggestions on how to translate that line from MATLAB?

Upvotes: 4

Views: 696

Answers (3)

Max

Reputation: 4045

Although it is a bit out of scope, I want to bring the project NLopt into play. As the acronym suggests, it tackles nonlinear optimization, and it offers plenty of algorithms: global and local, derivative-free and with explicit derivatives. The reason I mention it is that it has interfaces for MATLAB, Octave and Python (and C/C++, ...), which makes it very easy to reproduce solutions in different languages (that is why I came across it); plus, in my experience the algorithms are actually faster than the MATLAB-native ones.

For your problem, I would go with BOBYQA (bound optimization by quadratic approximation) or SLSQP (sequential least-squares quadratic programming). However, you will have to write a cost function rather than hand over matrices.

The installation is easy via pip

pip install nlopt

do a little check

import nlopt
# run quick test. Look for "Passed: optimizer interface test"
nlopt.test.test_nlopt()

Some quick code on how to use the optimizer:

import numpy as np
import nlopt

def fnc(x, grad):
    """
    The return value should be the value of the function at the point x,
    where x is a NumPy array of length n of the optimization parameters
    (the same as the dimension passed to the constructor).

    In addition, if the argument grad is not empty, i.e. grad.size > 0, then
    grad is a NumPy array of length n which should (upon return) be set to
    the gradient of the function with respect to the optimization parameters
    at x. That is, grad[i] should upon return contain the partial derivative
    of the objective with respect to x[i], for 0 <= i < n, if grad is non-empty.
    """
    H = np.eye(len(x))  # example matrix

    cost = 0.5 * x.transpose().dot(H.dot(x))
    return float(cost)  # make sure it is a number

n = 5                    # problem dimension
lb = -np.ones(n)         # example lower bounds
ub = np.ones(n)          # example upper bounds
x0 = 0.5 * np.ones(n)    # example initial guess

obj = nlopt.opt(nlopt.LN_BOBYQA, n)
obj.set_min_objective(fnc)  # fnc must already be defined at this point

obj.set_lower_bounds(lb)
obj.set_upper_bounds(ub)

xopt = obj.optimize(x0)
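For your concrete problem, the cost function additionally needs the linear f-term, and the bounds and start value come straight from your MATLAB call. Here is a minimal sketch of that mapping; the arrays below are only random stand-ins for your H_combined, f_combined, lb, ub and x0, and the tolerance settings are rough analogues of your optimset values, not exact equivalents:

import numpy as np
import nlopt

# random stand-ins for the data imported from Octave via oct2py
n = 123
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
H = A.T @ A + np.eye(n)        # symmetric positive definite, like a ridge Hessian
f = rng.standard_normal(n)     # linear term, already flattened to shape (n,)
lb = -10.0 * np.ones(n)
ub = 10.0 * np.ones(n)
x0 = np.zeros(n)

def cost(x, grad):
    # LN_BOBYQA is derivative-free, so grad arrives empty and can be ignored
    return float(0.5 * x @ H @ x + f @ x)

opt = nlopt.opt(nlopt.LN_BOBYQA, n)
opt.set_min_objective(cost)
opt.set_lower_bounds(lb)
opt.set_upper_bounds(ub)
opt.set_ftol_rel(1e-12)    # rough analogue of TolFun
opt.set_maxeval(100000)    # rough analogue of MaxFunEvals
x_opt = opt.optimize(x0)
print(opt.last_optimum_value())

Keep in mind that BOBYQA is derivative-free, so on a smooth quadratic like this it will typically need more function evaluations than an interior-point QP solver.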

In MATLAB, you just need to add the DLLs to your path. I wrote a short wrapper for BOBYQA to mimic MATLAB's interface (in case you want to check it out in both languages =P -- let me know; I use it more often in MATLAB... as the wrapper probably shows ^^):

function [x_opt, fval, exitflag] = BOBYQA(fnc,x0,lb,ub, varargin)
% performs a constrained, derivative-free local optimization
%
% --- Syntax:
% x_opt = BOBYQA(fnc,x0,lb,ub)
% x_opt = BOBYQA(...,'MaxEval',10)
% x_opt = BOBYQA(...,'MaxTime',5)
% [x_opt, fval] = BOBYQA(...)
% [x_opt, fval, exitflag] = BOBYQA(...)
% 
% --- Description:
% x_opt = BOBYQA(fnc,x0,lb,ub)  takes a function handle 'fnc', an initial
%               value 'x0' and lower and upper boundary constraints 'lb'
%               and 'ub' as input. Performs a constrained local
%               optimization using the algorithm BOBYQA from Powell
%               http://www.damtp.cam.ac.uk/user/na/NA_papers/NA2009_06.pdf.
%               Returns the optimal value 'x_opt'.
% x_opt = BOBYQA(...,'MaxEval',10) optional input parameter that defines the
%               maximum number of evaluations.
% x_opt = BOBYQA(...,'MaxTime',5) optional input parameter that defines the
%               maximum allowed time in seconds for the optimization. This 
%               is a soft constraint and may be (slightly) broken.
% [x_opt, fval] = BOBYQA(...) second return value is the optimal function
%               value.
% [x_opt, fval, exitflag] = BOBYQA(...) third return value is the exitflag,
%               see function NLoptExitFlag().
% 
% ------------------------------------------------------------------- 2017

% NLOPT_LN_BOBYQA

% --- parse input
IN = inputParser;
addParameter(IN,'MaxEval',10000, @(x)validateattributes(x,{'numeric'},{'positive'}));
addParameter(IN,'MaxTime',60, @(x)validateattributes(x,{'numeric'},{'positive'}));
parse(IN,varargin{:});

% generic success code: +1
%      stopval reached: +2
%         ftol reached: +3
%         xtol reached: +4
%      maxeval reached: +5
%      maxtime reached: +6
% generic failure code: -1
%    invalid arguments: -2
%        out of memory: -3
%     roundoff-limited: -4

    % set options
    opt = struct();
    opt.min_objective = fnc;
    opt.lower_bounds = lb;
    opt.upper_bounds = ub;



    % stopping criteria
    opt.maxtime = IN.Results.MaxTime; % s  % status = +6
%     opt.fc_tol = FncOpt.STOP_FNC_TOL*ones(size(ParInit)); % +3
%     opt.xtol_rel = FncOpt.STOP_XTOL_REL; % +4
%     opt.xtol_abs = FncOpt.STOP_XTOL_ABS*ones(size(ParInit)); % +4
    opt.maxeval = IN.Results.MaxEval; % status = +5

    % call function
    opt.algorithm = 34;% eval('NLOPT_LN_BOBYQA');

    t_start = tic;
    [x_opt, fval, exitflag] = nlopt_optimize(opt,x0);
    dt = toc(t_start);
    fprintf('BOBYQA took %.5f seconds | exitflag: %d (%s)\n',dt,exitflag,NLoptExitFlag(exitflag))
end

function txt = NLoptExitFlag(exitflag)
% generic success code: +1
%      stopval reached: +2
%         ftol reached: +3
%         xtol reached: +4
%      maxeval reached: +5
%      maxtime reached: +6
% generic failure code: -1
%    invalid arguments: -2
%        out of memory: -3
%     roundoff-limited: -4

switch exitflag
    case 1
        txt = 'generic success code';
    case 2
        txt = 'stopval reached';
    case 3
        txt = 'ftol reached';
    case 4
        txt = 'xtol reached';
    case 5
        txt = 'maxeval reached';
    case 6
        txt = 'maxtime reached';
    case -1
        txt = 'generic failure code';
    case -2
        txt = 'invalid arguments';
    case -3
        txt = 'out of memory';
    case -4
        txt = 'roundoff-limited';
    otherwise
        txt = 'undefined exitflag!';
end
end

Upvotes: 0

Panda_User

Reputation: 309

Yes, although the problem with cvxopt_quadprog is that it is considerably slower for large iterative optimizations over time series, as it checks each time whether the problem is PSD. That is why I was hoping to make use of quadprog, which has been shown to be much faster. Ref: https://github.com/stephane-caron/qpsolvers
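Since the qpsolvers page linked above wraps several backends (including quadprog) behind one front end, a minimal sketch of routing the problem through it could look like the following. The lb/ub keyword bounds and the explicit solver argument assume a reasonably recent qpsolvers version, and H and f are random stand-ins for the actual matrices:

import numpy as np
from qpsolvers import solve_qp

# random stand-ins for H_combined / f_combined and the bounds
n = 123
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
H = A.T @ A + np.eye(n)       # quadprog needs a symmetric positive definite matrix
f = rng.standard_normal(n)    # 1-D, i.e. already flattened
lb = -10.0 * np.ones(n)
ub = 10.0 * np.ones(n)

# qpsolvers uses the same convention as MATLAB: min 0.5*x'*H*x + f'*x
x = solve_qp(H, f, lb=lb, ub=ub, solver="quadprog")
print(x[:5])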

Upvotes: 0

igrinis

Reputation: 13686

Try using cvxopt_quadprog. The author claims it imitates MATLAB quadprog, and it should accept arguments the same way:

def quadprog(H, f, L=None, k=None, Aeq=None, beq=None, lb=None, ub=None):
    """
    Input: Numpy arrays, the format follows MATLAB quadprog function: https://www.mathworks.com/help/optim/ug/quadprog.html
    Output: Numpy array of the solution
    """

Most probably the error occurs because your f is a matrix of shape (1, 123), while it should be a vector of length 123. You can try to reshape it:

f = f.reshape(f.shape[1])
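If you want to stay with the plain quadprog package rather than cvxopt_quadprog, also note that its conventions differ from MATLAB: it minimizes 0.5*x'*G*x - a'*x subject to C'*x >= b, so the linear term needs a sign flip and the bounds have to be rewritten as inequality constraints. A rough sketch, with random stand-ins for your H, f, lb and ub:

import numpy as np
import quadprog

# random stand-ins for H_combined, f_combined and the bounds
n = 123
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
H = A.T @ A + np.eye(n)                 # quadprog requires symmetric positive definite G
f = rng.standard_normal((1, n))         # shape (1, n), as in the question
lb = -10.0 * np.ones(n)
ub = 10.0 * np.ones(n)

a = -f.reshape(f.shape[1])              # flatten and flip the sign of the linear term
C = np.hstack([np.eye(n), -np.eye(n)])  # each column of C is one constraint in C'x >= b
b = np.concatenate([lb, -ub])           # together they encode lb <= x <= ub
x = quadprog.solve_qp(H, a, C, b)[0]    # first element of the returned tuple is the solution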

Upvotes: 1
