Reputation: 61
I was recently reading some PyTorch code and came across the loss.backward() and optimizer.step() functions. Are there any equivalents of these in TensorFlow/Keras?
Upvotes: 6
Views: 2221
Reputation:
The equivalent of loss.backward() in TensorFlow is tf.GradientTape(). TensorFlow provides the tf.GradientTape API for automatic differentiation, i.e. computing the gradient of a computation with respect to its input variables. TensorFlow "records" all operations executed inside the context of a tf.GradientTape onto a "tape". TensorFlow then uses that tape and the gradients associated with each recorded operation to compute the gradients of the "recorded" computation using reverse-mode differentiation.
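Here is a minimal sketch of that pattern; the toy linear model and squared-error loss are just illustrative:

```python
import tensorflow as tf

# Toy model: trainable variables are watched by the tape automatically.
w = tf.Variable(2.0)
b = tf.Variable(0.5)
x = tf.constant(3.0)
y_true = tf.constant(10.0)

with tf.GradientTape() as tape:
    y_pred = w * x + b                  # operations here are "recorded" onto the tape
    loss = tf.square(y_true - y_pred)

# Analogous to loss.backward() in PyTorch: compute d(loss)/dw and d(loss)/db
# by replaying the tape with reverse-mode differentiation.
grads = tape.gradient(loss, [w, b])
```

One difference from PyTorch worth noting: the gradients are returned to you as tensors rather than accumulated into a .grad attribute on each variable.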
The equivalent of optimizer.step() in TensorFlow is minimize(), which minimizes the loss by updating the variable list. Calling minimize() takes care of both computing the gradients and applying them to the variables.
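A minimal sketch with a Keras optimizer (reusing the toy variables from above; in TF2, minimize() expects the loss as a zero-argument callable):

```python
import tensorflow as tf

w = tf.Variable(2.0)
b = tf.Variable(0.5)
x = tf.constant(3.0)
y_true = tf.constant(10.0)

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

# minimize() computes the gradients AND applies them to var_list in one call,
# so it covers both loss.backward() and optimizer.step() from PyTorch.
loss_fn = lambda: tf.square(y_true - (w * x + b))
optimizer.minimize(loss_fn, var_list=[w, b])
```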
If you want to process the gradients before applying them, you can instead use the optimizer in three steps (see the sketch after this list):

1. Compute the gradients with tf.GradientTape.
2. Process the gradients as you wish.
3. Apply the processed gradients with apply_gradients().
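A sketch of the three-step version; the gradient-clipping step is just one example of processing, chosen for illustration:

```python
import tensorflow as tf

w = tf.Variable(2.0)
b = tf.Variable(0.5)
x = tf.constant(3.0)
y_true = tf.constant(10.0)

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

# 1. Compute the gradients with tf.GradientTape.
with tf.GradientTape() as tape:
    loss = tf.square(y_true - (w * x + b))
grads = tape.gradient(loss, [w, b])

# 2. Process the gradients as you wish (here: clip by global norm).
grads, _ = tf.clip_by_global_norm(grads, 1.0)

# 3. Apply the processed gradients -- this step is the closest
#    counterpart to optimizer.step() in PyTorch.
optimizer.apply_gradients(zip(grads, [w, b]))
```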
Hope this answers your question. Happy learning.
Upvotes: 13