Reputation: 81
I have been using MATLAB's fminunc function to solve my optimization problem. I want to try the minFunc package:
http://www.di.ens.fr/~mschmidt/Software/minFunc.html
When using fminunc, I defined a function funObj.m that gives me the objective value and the gradient at any point 'x'. It also takes in several external inputs, say {a,b,c}, which are matrices. So the function prototype looks like:
function [objVal,G] = funObj(x,a,b,c)
I want to use the same setup with the minFunc package. From the examples, I figured this should work:
options.Method='lbfgs';
f = @(x)funObj(x,a,b,c);
x = minFunc(f,x_init,options);
But when I call it this way, I get the error:
Error using funObj
Too many output arguments.
What is the correct way to call minFunc for my case?
**EDIT:** Alright, here is a sample function that I want to use with minFunc. Let's say I want to find the minimum of a*(b-x)^2, where a and b are scalar parameters and x is a scalar too. The MATLAB objective function will then look like:
function obj = testFunc(x,a,b)
obj = a*(b-x)^2;
The function call to minimize this using fminunc (in MATLAB) is simply:
f = @(x)testFunc(x,a,b);
x = fminunc(f,x_init);
This gives me the minimum at x = 10. Now, how do I do the same using minFunc?
Upvotes: 4
Views: 2562
Reputation: 11
Agree with Mark. I think the simplest way to solve it is

minFunc(@testFunc, x_init, a, b, c)

In MATLAB, an anonymous function can only have one return value, so f = @(x)testFunc(x,a,b); makes your method drop the gradient part every time. Because minFunc can accept extra parameters, you can pass a, b, and c after x_init. I think this would work.
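A sketch of that call, under the assumption that minFunc's signature is minFunc(funObj, x0, options, varargin), i.e. everything after the options struct is forwarded to the objective on each evaluation (shown here with the original funObj from the question, which already returns the gradient):

options.Method = 'lbfgs';
% funObj is then called internally as funObj(x, a, b, c)
x = minFunc(@funObj, x_init, options, a, b, c);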
Upvotes: 0
Reputation: 41
"Note that by default minFunc assumes that the gradient is supplied, unless the 'numDiff' option is set to 1 (for forward-differencing) or 2 (for central-differencing)."
The error occurs because your function returns only one output argument. You can either return the gradient as a second output or turn on numerical differencing.
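For the testFunc example from the edit, a minimal sketch of the first option (the gradient of a*(b-x)^2 with respect to x is -2*a*(b-x)), saved as testFunc.m:

function [obj, grad] = testFunc(x, a, b)
obj = a*(b - x)^2;       % objective value
grad = -2*a*(b - x);     % analytic gradient d/dx of a*(b-x)^2

and the call stays as in the question:

options.Method = 'lbfgs';
f = @(x)testFunc(x,a,b);
x = minFunc(f, x_init, options);

Alternatively, keep the single-output testFunc and set options.numDiff = 1; (forward differencing) or 2 (central differencing) before the call, so minFunc estimates the gradient numerically.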
Upvotes: 4