Reputation: 55
One way to do gradient descent in Python is to code it myself. However, given how popular a concept it is in machine learning, I was wondering if there is a Python library that I can import that gives me a gradient descent method (preferably mini-batch gradient descent since it's generally better than batch and stochastic gradient descent, but correct me if I'm wrong).
I checked NumPy and SciPy but couldn't find anything. I have no experience with TensorFlow, but I looked through its online API. I found tf.train.GradientDescentOptimizer, but there is no parameter that lets me choose a batch size, so I'm rather fuzzy on what it actually is.
Sorry if I sound naive. I'm self-learning a lot of this stuff.
Upvotes: 4
Views: 11745
Reputation: 1
You can apply mini-batch gradient descent using the SGDRegressor class from scikit-learn. The code is:
import random
from sklearn.linear_model import SGDRegressor

sgd = SGDRegressor(learning_rate='constant', eta0=0.01)
batch_size = 15  # number of samples per mini-batch

for i in range(100):
    # draw a random mini-batch and take one gradient step on it
    indexes = random.sample(range(x_train.shape[0]), batch_size)
    sgd.partial_fit(x_train[indexes], y_train[indexes])

sgd.predict(x_test)  # x_test: whatever data you want predictions on
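Each call to partial_fit performs a single epoch of stochastic gradient descent over only the rows you pass in, so feeding it a freshly sampled batch on every iteration is what turns plain SGD into mini-batch gradient descent here. Note that x_train and y_train are assumed to be NumPy arrays already defined in your code.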
Upvotes: 0
Reputation: 3633
To state the obvious, gradient descent optimizes a function. When you use an implementation of gradient descent from some library, you need to specify the function using that library's constructs. For example, functions are represented as computation graphs in TensorFlow. You cannot just take a pure Python function and ask TensorFlow's gradient descent optimizer to optimize it.
If your use case allows you to use TensorFlow computation graphs (and all the associated machinery: how to run the function, compute its gradient, etc.), tf.train.*Optimizer would be an obvious choice. Otherwise, it is unusable.
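To make the graph point concrete, here is a minimal sketch using the TF 1.x graph API that the question refers to (the toy loss and variable names are made up for illustration). Note that the optimizer itself has no batch-size parameter: the batch size is simply however many rows of data you feed through the graph on each step.

import tensorflow as tf  # TF 1.x, matching tf.train.GradientDescentOptimizer

# The function to minimize must be built as a computation graph,
# not written as a plain Python function.
x = tf.Variable(5.0)
loss = tf.square(x - 3.0)  # toy objective: f(x) = (x - 3)^2

opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train_step = opt.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_step)  # one gradient descent step
    print(sess.run(x))  # converges toward 3.0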
If you need something light, https://github.com/HIPS/autograd is probably the best option of all the popular libraries. Its optimizers can be found here: https://github.com/HIPS/autograd/blob/master/autograd/misc/optimizers.py
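As a sketch of the lightweight route (the loss function and step size are made up for illustration), autograd differentiates a plain Python/NumPy function, so you can write the descent loop yourself:

import autograd.numpy as np  # thinly wrapped NumPy that autograd can trace
from autograd import grad

def loss(w):
    # toy objective: minimized at w = 3
    return (w - 3.0) ** 2

grad_loss = grad(loss)  # returns a function computing d(loss)/dw

w = 0.0
for _ in range(100):
    w -= 0.1 * grad_loss(w)  # plain gradient descent step
print(w)  # converges toward 3.0

The optimizers in the linked optimizers.py follow the same idea, taking a gradient function and initial parameters instead of making you write the loop by hand.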
Upvotes: 5