may

Reputation: 79

Gradient descent algorithm giving incorrect answer in matlab

I'm taking a course in machine learning and trying to implement the gradient descent algorithm in MATLAB. The function computeCost works fine, as I have tested it separately. I use it to print the cost at every iteration, and it doesn't seem to be decreasing at all; it just fluctuates randomly. The value of alpha was given as 0.01, so I know the learning rate isn't too large. The values I get for theta are far off from the expected output. Where am I going wrong? Thanks in advance!

function theta = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta

% Initialize some useful values
m = length(y); % number of training examples

temp1 = 0;
temp2 = 0;
for iter = 1:num_iters
    for k = 1:m
        temp1 = temp1 + (theta(1) + theta(2)*X(k, 2) - y(k));
        temp2 = temp2 + (theta(1) + theta(2)*X(k, 2) - y(k))*X(k, 2);
    end

    theta(1) = theta(1) - (1/m)*alpha*temp1;
    theta(2) = theta(2) - (1/m)*alpha*temp2;

    computeCost(X, y, theta)
end

end

Edit: here is computeCost as well

function J = computeCost(X, y, theta)
m = length(y); % number of training examples


J = 0;
temp = 0;
for index = 1:m
    temp = temp + (theta(1) + theta(2)*X(index, 2) - y(index))^2;
end
J = temp/(2*m);
end

Upvotes: 1

Views: 363

Answers (1)

Peter de Rivaz

Reputation: 33509

Try changing:

temp1=0;
temp2=0;
for iter = 1:num_iters

to

for iter = 1:num_iters
  temp1=0;
  temp2=0;

The gradient must be recomputed from scratch on each iteration. By initializing temp1 and temp2 only once, outside the outer loop, you are accumulating the gradient sums across iterations, which effectively builds in a momentum term and makes the updates fluctuate instead of converging.
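With that change applied, the full function would look like the sketch below (same variable names and update rule as in the question; only the placement of the accumulator resets changes):

```matlab
function theta = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta
m = length(y); % number of training examples

for iter = 1:num_iters
    % Reset the accumulators every iteration so the sums reflect
    % only the gradient at the current theta.
    temp1 = 0;
    temp2 = 0;
    for k = 1:m
        h = theta(1) + theta(2)*X(k, 2);   % hypothesis for example k
        temp1 = temp1 + (h - y(k));
        temp2 = temp2 + (h - y(k))*X(k, 2);
    end

    % Simultaneous update: both temps were computed from the old theta.
    theta(1) = theta(1) - (alpha/m)*temp1;
    theta(2) = theta(2) - (alpha/m)*temp2;

    computeCost(X, y, theta)
end

end
```

With the resets inside the outer loop, the cost printed by computeCost should decrease monotonically for a suitably small alpha.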

Upvotes: 3
