Reputation: 4325
I have implemented this gradient descent in Numpy:
import numpy as np

def gradientDescent(X, y, theta, alpha, iterations):
    m = len(y)
    for i in range(iterations):
        h = np.dot(X, theta)                             # predictions for current theta
        loss = h - y                                     # residuals
        theta = theta - (alpha / m) * np.dot(X.T, loss)  # gradient step: update theta
    return theta
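For reference, a minimal way to exercise it on synthetic data (the shapes, values, and seed below are my own assumptions, not part of the original code):

rng = np.random.default_rng(0)
X = np.hstack([np.ones((100, 1)), rng.standard_normal((100, 2))])  # bias column + 2 features
true_theta = np.array([1.0, 2.0, -3.0])
y = X @ true_theta + 0.1 * rng.standard_normal(100)

theta = gradientDescent(X, y, theta=np.zeros(3), alpha=0.1, iterations=1000)
print(theta)  # should approach true_theta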
While the other parts of the code are completely vectorized, there is still a for loop here that seems impossible to eliminate: since theta must be updated at every step, I don't see how to vectorize it or write it more efficiently.
Thank you for your help.
Upvotes: 3
Views: 778
Reputation: 17704
You can't vectorize the for loop, because each iteration updates state: the theta produced by one step is the input to the next. Vectorization is primarily useful when each iteration computes a result that is (in some sense) independent of the others, so the whole batch can be processed at once.
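To illustrate the distinction, here is a minimal sketch (the arrays and variable names are made up for the example):

import numpy as np

x = np.arange(1.0, 6.0)

# Independent iterations: each output depends only on x[i],
# so the loop can be replaced by a single array expression.
squares_loop = np.array([x[i] ** 2 for i in range(len(x))])
squares_vec = x ** 2  # same result, no Python-level loop

# Dependent iterations: each value needs the previous one,
# so the iteration order matters and the loop cannot simply
# be replaced by one elementwise operation.
running = np.empty_like(x)
total = 0.0
for i in range(len(x)):
    total += x[i]
    running[i] = total
# (This particular recurrence happens to have a NumPy shortcut,
# np.cumsum, but a general state update like the theta step in
# gradient descent does not.)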
Upvotes: 4