suwa

Reputation: 1

Gradient Descent code error - get the same answer for both thetas

function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)

    m = length(y); % number of training examples
    J_history = zeros(num_iters, 1);
    h = X * theta; 

    for iter = 1:num_iters
        temp0 = theta(1) - alpha * (1/m) * sum(h - y);
        temp1 = theta(2) - alpha * (1/m) * sum(h - y).*X(:,2);
        theta(1) = temp0;
        theta(2) = temp1;
        J_history(iter) = computeCost(X, y, theta);
    end

I get the same answer for both thetas. Can someone tell me what is wrong with my code?

Upvotes: 0

Views: 24

Answers (1)

JimmyOnThePage

Reputation: 965

Your prediction h needs to be recomputed inside the loop. Currently you adjust theta on every iteration but never recalculate the predictions with the new theta values, so theta cannot converge. Also, the sum for temp1 must be taken over the whole element-wise product (h - y).*X(:,2), not just over (h - y):

m = length(y); % number of training examples
J_history = zeros(num_iters, 1); 

for iter = 1:num_iters
    h = X * theta; % recompute predictions with the current theta
    temp0 = theta(1) - alpha * (1/m) * sum(h - y);
    temp1 = theta(2) - alpha * (1/m) * sum((h - y).*X(:,2));
    theta(1) = temp0;
    theta(2) = temp1;
    J_history(iter) = computeCost(X, y, theta);
end
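As a side note, the two temp variables can be avoided entirely by vectorizing the update: X' * (h - y) computes the gradient for all parameters at once, so theta is updated simultaneously without any per-element bookkeeping. Here is a minimal sketch of that idea in Python/NumPy (function and variable names are illustrative, not from the original Octave code; the cost is the usual 1/(2m) squared-error):

```python
import numpy as np

def gradient_descent(X, y, theta, alpha, num_iters):
    """Vectorized batch gradient descent for linear regression."""
    m = len(y)
    J_history = np.zeros(num_iters)
    for it in range(num_iters):
        h = X @ theta                                  # predictions with the CURRENT theta
        theta = theta - (alpha / m) * (X.T @ (h - y))  # update all parameters at once
        J_history[it] = np.sum((X @ theta - y) ** 2) / (2 * m)
    return theta, J_history
```

The same one-liner works in Octave as theta = theta - (alpha/m) * X' * (X*theta - y), and it generalizes to any number of features without adding more temp variables.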

Upvotes: 1
