Sean

Reputation: 1021

TensorFlow: how to compute the gradient of the output with respect to the input?

Recently, I have been doing some experiments with a neural network D(x), where x is an input image batch with batch size 64. I want to compute the gradient of D(x) with respect to x. Should I do the computation as follows?

grad = tf.gradients(D(x), [x])

Thank you everybody!

Upvotes: 3

Views: 4934

Answers (2)

Mohammad Amin

Reputation: 454

Yes, you will need to use tf.gradients. For more details see https://www.tensorflow.org/api_docs/python/tf/gradients.
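As a minimal sketch of how that can look in graph-mode TensorFlow (the placeholder shape and the small dense network below are hypothetical stand-ins for your D):

import numpy as np
import tensorflow as tf

def D(x):
    # Toy stand-in for the real network: flatten, one hidden layer, scalar output.
    h = tf.layers.dense(tf.layers.flatten(x), 128, activation=tf.nn.relu)
    return tf.layers.dense(h, 1)

x = tf.placeholder(tf.float32, shape=[64, 28, 28, 1])  # batch of 64 images
y = D(x)

# tf.gradients returns a list with one tensor per entry of [x];
# grads[0] has the same shape as x.
grads = tf.gradients(y, [x])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    g = sess.run(grads[0], feed_dict={x: np.random.rand(64, 28, 28, 1)})
    print(g.shape)  # (64, 28, 28, 1)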

Upvotes: 1

Ujjwal

Reputation: 208

During the training of a neural network, you generally compute the gradient of a loss function with respect to the input. This is because the loss function is scalar-valued, so its gradient is well defined.

However, the output D(x) is, I assume, a vector or a set of vectors. For a vector-valued output, you will need to define how the gradient is to be computed with respect to the input (i.e., the layer that produces that output).

The exact details of that implementation depend on the framework you are using.
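In TensorFlow specifically, tf.gradients makes that choice for you: for a non-scalar ys it differentiates the sum of the elements of ys, and the grad_ys argument lets you supply your own per-component weights (a vector-Jacobian product). A small sketch, with a hypothetical tanh op standing in for D(x):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[64, 10])
y = tf.tanh(x)  # hypothetical vector-valued output

# With grad_ys=None (the default), this is the gradient of sum(y) w.r.t. x.
grad_of_sum = tf.gradients(y, [x])[0]

# grad_ys weights each component of y, i.e. it computes grad_ys^T * dy/dx.
weights = tf.ones_like(y)
weighted_grad = tf.gradients(y, [x], grad_ys=weights)[0]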

Upvotes: 0
