Reputation: 11
I wanted to run a supervised learning algorithm with a hypothesis that has a parameter (theta2) in an unusual position, inside the exponential:
y = theta1 * (exp(theta2 * X)) + theta0
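For reference, the hypothesis's partial derivatives, which (scaled by 2 from differentiating the squared error) form the columns of the matrix A in the code below, are:

dh/dtheta0 = 1
dh/dtheta1 = exp(theta2 * x)
dh/dtheta2 = theta1 * x * exp(theta2 * x)

so each gradient term is 2 * (h(x) - y) * dh/dtheta_j, summed over the training examples.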
I tried using gradient descent with the following code:
m = length(y);
num_iters = 500;
J_history = zeros(num_iters, 1);
alpha = 0.1;
theta = zeros(3, 1);

% A(q, :) holds the derivative terms for [theta0, theta1, theta2],
% evaluated once here at the initial theta
for q = 1:m
    A(q, :) = [2, (2 * exp(theta(3, 1) * X(q, 1))), (2 * theta(2, 1) * X(q, 1) * exp(theta(3, 1) * X(q, 1)))];
end

for iter = 1:num_iters
    num_theta = length(theta);
    for j = 1:num_theta
        inner_sum = 0;
        for i = 1:m
            % residual (h(x_i) - y_i) times the derivative term for theta_j
            inner_sum = inner_sum + (theta(2, 1) * exp(X(i, 1) * theta(3, 1)) + theta(1, 1) - y(i, 1)) * A(i, j);
        end
        theta(j, 1) = theta(j, 1) - (alpha * inner_sum / m);
    end
    % Save the cost J of every iteration
    J_history(iter) = compute_cost(X, y, theta);
end
where compute_cost is my cost function:
function J = compute_cost(X, y, theta)
    m = length(y);
    predictions = theta(2, 1) * exp(X * theta(3, 1)) + theta(1, 1); % hypothesis
    sqrErrors = (predictions - y) .^ 2;
    J = sum(sqrErrors) / (2 * m);
end
Now this is where I hit a wall: theta(3, 1) (my theta2) stays at zero when I initialize theta to zeros(3, 1), and it blows up to infinity when I initialize theta to ones(3, 1).
So, can I use this hypothesis for linear regression, or are there other, similar hypothesis functions that can be used instead of the current one?
Upvotes: 1
Views: 64
Reputation: 11434
The Python function scipy.optimize.curve_fit
has an example in its documentation that fits exactly your function.
Check it: https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.optimize.curve_fit.html
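For reference, here is a minimal sketch adapted from that documentation page, rewritten in your notation; the synthetic data, the true parameter values, and the initial guess p0 are illustrative assumptions, not part of your problem:

import numpy as np
from scipy.optimize import curve_fit

# Your hypothesis: y = theta1 * exp(theta2 * x) + theta0
def hypothesis(x, theta0, theta1, theta2):
    return theta1 * np.exp(theta2 * x) + theta0

# Synthetic data, purely for illustration: a decaying exponential plus noise
xdata = np.linspace(0, 4, 50)
ydata = hypothesis(xdata, 0.5, 2.5, -1.3) + 0.2 * np.random.normal(size=len(xdata))

# curve_fit performs the nonlinear least-squares fit; p0 is an (assumed) starting
# point, which matters because the exponential makes the fit sensitive to it
popt, pcov = curve_fit(hypothesis, xdata, ydata, p0=(1.0, 1.0, -1.0))
print(popt)  # fitted (theta0, theta1, theta2)

Under the hood, curve_fit uses a damped least-squares method (Levenberg-Marquardt by default when no bounds are given), which is far more robust for this kind of model than fixed-step gradient descent.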
Upvotes: 0