hansMoser

Reputation: 11

Gradient descent Matlab

I have a problem with gradient descent in MATLAB. I don't know how to build the function.

Default settings:

  max_iter = 1000;
  learing = 1;
  degree = 1;

My logistic regression cost function (is this correct?):

function [Jval, Jgrad] = logcost(function(theta, matrix, y)
 mb = matrix * theta;
 p = sigmoid(mb);

 Jval = sum(-y' * log(p) - (1 - y')*log(1 - p)) / length(matrix);

if nargout > 1
    Jgrad = matrix' * (p - y) / length(matrix);
end

and now my gradient descent function:

function [theta, Jval] = graddescent(logcost, learing, theta, max_iter)

[Jval, Jgrad] = logcost(theta);
for iter = 1:max_iter 
  theta = theta - learing * Jgrad; % is this correct?
  Jval[iter] = ???

end

Thanks for any help :), Hans

Upvotes: 0

Views: 4744

Answers (1)

lackadaisical

Reputation: 1694

You can specify your cost function as a regular MATLAB function:

function [Jval, Jgrad] = logcost(theta, matrix, y)
    mb = matrix * theta;
    p = sigmoid(mb);

    % Average negative log-likelihood over the training examples.
    % (y' * log(p) is already a scalar, so no sum() is needed;
    % size(matrix, 1) is the number of examples, which is safer
    % than length(matrix) for wide matrices.)
    Jval = (-y' * log(p) - (1 - y') * log(1 - p)) / size(matrix, 1);

    if nargout > 1
        Jgrad = matrix' * (p - y) / size(matrix, 1);
    end
end
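As a sanity check on the math (not part of the original MATLAB answer), here is a hypothetical NumPy port of the same cost and gradient, verified against a finite-difference approximation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logcost(theta, X, y):
    # Logistic-regression cost and gradient, mirroring the MATLAB version.
    p = sigmoid(X @ theta)
    m = X.shape[0]
    Jval = float(-(y @ np.log(p)) - (1 - y) @ np.log(1 - p)) / m
    Jgrad = X.T @ (p - y) / m
    return Jval, Jgrad

# Random data just for the check.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = (rng.random(20) > 0.5).astype(float)
theta = np.zeros(2)

J, g = logcost(theta, X, y)

# At theta = 0 every prediction is 0.5, so the cost must equal log(2).
assert abs(J - np.log(2)) < 1e-9

# Central finite differences should match the analytic gradient.
eps = 1e-6
num = np.array([(logcost(theta + eps * e, X, y)[0]
                 - logcost(theta - eps * e, X, y)[0]) / (2 * eps)
                for e in np.eye(2)])
assert np.allclose(g, num, atol=1e-5)
```

If the analytic gradient disagreed with the finite-difference estimate, the `Jgrad` formula (not the descent loop) would be the place to look.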

Then, create your gradient descent method (the cost and gradient are re-evaluated from the current theta at the start of each loop iteration):

function [theta, Jval] = graddescent(logcost, learning, theta, max_iter)
    for iter = 1:max_iter
        [Jval, Jgrad] = logcost(theta);
        theta = theta - learning * Jgrad;
    end
end

and call it with a function handle that can be used to evaluate your cost:

% Initialize 'matrix' and 'y' ...
matrix = randn(2,2);
y = randn(2,1);

% Create a function handle that fixes 'matrix' and 'y'.
fLogcost = @(theta)(logcost(theta, matrix, y));

% Perform gradient descent.
[theta, Jval] = graddescent(fLogcost, 1e-3, [0 0]', 10);
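To see that the loop actually minimizes the cost, here is the same descent sketched in NumPy (a hypothetical port, with made-up separable data; the learning rate and iteration count are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logcost(theta, X, y):
    # Cost and gradient, as in the MATLAB function above.
    p = sigmoid(X @ theta)
    m = X.shape[0]
    return (float(-(y @ np.log(p)) - (1 - y) @ np.log(1 - p)) / m,
            X.T @ (p - y) / m)

def graddescent(cost, learning, theta, max_iter):
    # Plain fixed-step gradient descent, as in the MATLAB answer.
    for _ in range(max_iter):
        Jval, Jgrad = cost(theta)
        theta = theta - learning * Jgrad
    return theta, Jval

# Linearly separable toy data with true weights (1, -1).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = (X @ np.array([1.0, -1.0]) > 0).astype(float)

f = lambda th: logcost(th, X, y)
theta, J = graddescent(f, 0.1, np.zeros(2), 200)

# Starting from theta = 0 the cost is log(2); descent must have reduced it.
assert J < np.log(2)
```

Note that `Jval` returned here is the cost at the second-to-last theta; if you want the cost history, store `Jval` into an array indexed by the loop counter instead.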

You can also take a look at fminunc, MATLAB's built-in function for unconstrained optimization, which offers several gradient-based minimization techniques (and can use the gradient your cost function already returns).

Regards.

Upvotes: 1
