cmed123

Reputation: 705

Tensorflow No gradients provided for any variable with different shape of variable

with tf.GradientTape() as tape:
    images, labels = x
    initial_points = self.model(images, is_training=True)
    final_images = (tf.ones_like(initial_points) + initial_points).numpy()
    final_images = np.expand_dims(final_images, axis=-1)
    final_labels = tf.zeros_like(final_images)
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=final_images, labels=final_labels)
gradients = tape.gradient(loss, self.model.trainable_variables)
self.optimizer.apply_gradients(zip(gradients, self.model.trainable_variables))

Why is it that if I modify the shape of the model output using np.expand_dims(), I get the following error when applying the gradients to my model variables:

"ValueError: No gradients provided for any variable ... "

It works fine without the np.expand_dims() call, though. Is it because the loss has to have the same shape as the model output? Or is it because np.expand_dims() is non-differentiable?

Upvotes: 1

Views: 225

Answers (2)

user192361237

Reputation: 538

The TensorFlow library operates in a very specific manner when you are using tf.GradientTape(). Inside the tape's context, it records every TensorFlow operation so that it can compute the partial derivatives needed to update the gradients afterwards. It can do this because every tf operation has a gradient definition registered for it.

When you use a NumPy function, however, the chain breaks: the value leaves TensorFlow's computation graph. TensorFlow does not know/understand what the NumPy function did, and thus it can no longer compute the partial derivative of your loss via the chain rule.

For this reason, you must use only tf functions inside the GradientTape() context.
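
A minimal, self-contained sketch of the effect (standalone toy code, not the asker's model): once a value passes through .numpy(), the tape can no longer trace it back to the variable.

    import tensorflow as tf

    x = tf.Variable(2.0)

    # Staying inside TensorFlow: the tape records tf.square,
    # so dy/dx = 2x = 4.0 is recoverable.
    with tf.GradientTape() as tape:
        y = tf.square(x)
    print(tape.gradient(y, x))  # tf.Tensor(4.0, shape=(), dtype=float32)

    # Detouring through NumPy: .numpy() detaches the value from the tape,
    # so the gradient comes back as None; this is the same root cause as
    # the "No gradients provided for any variable" error.
    with tf.GradientTape() as tape:
        y_detached = tf.constant(tf.square(x).numpy())
    print(tape.gradient(y_detached, x))  # None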

Upvotes: 1

Kartikey Singh

Reputation: 892

Always use the TensorFlow versions of NumPy functions to avoid this kind of error. Here, tf.expand_dims() replaces np.expand_dims(), and the intermediate .numpy() call has to go as well, since it would also detach the tensor from the tape:

with tf.GradientTape() as tape:
    images, labels = x
    initial_points = self.model(images, is_training=True)
    # Keep everything in TensorFlow ops: no .numpy(), and
    # tf.expand_dims() instead of np.expand_dims().
    final_images = tf.ones_like(initial_points) + initial_points
    final_images = tf.expand_dims(final_images, axis=-1)
    final_labels = tf.zeros_like(final_images)
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=final_images, labels=final_labels)
gradients = tape.gradient(loss, self.model.trainable_variables)
self.optimizer.apply_gradients(zip(gradients, self.model.trainable_variables))
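
Note that tf.expand_dims() has a registered gradient (it is essentially a reshape), so the chain rule flows through it unchanged and tape.gradient() can reach the model's trainable variables again.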

Upvotes: 2
