Derk

tf.keras GradientTape: get gradient with respect to input

TensorFlow version: 2.1

I want to get the gradients with respect to the input instead of the gradients with respect to the trainable weights. I adapted the example from https://www.tensorflow.org/guide/keras/train_and_evaluate as follows:

import tensorflow as tf
import numpy as np

physical_devices = tf.config.experimental.list_physical_devices('GPU')
assert len(physical_devices) > 0, 'Not enough GPU hardware devices available'
tf.config.experimental.set_memory_growth(physical_devices[0], True)

def loss_fun(y_true, y_pred):
    loss = tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)
    return loss

# Create a dataset
x = np.random.rand(10, 180, 320, 3).astype(np.float32)
y = np.random.rand(10, 1).astype(np.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(1)

# Create a model
base_model = tf.keras.applications.MobileNet(input_shape=(180, 320, 3), weights=None, include_top=False)
x = tf.keras.layers.GlobalAveragePooling2D()(base_model.output)
output = tf.keras.layers.Dense(1)(x)
model = tf.keras.models.Model(inputs=base_model.input, outputs=output)

for input, target in dataset:

    for iteration in range(400):
        with tf.GradientTape() as tape:
            # Run the forward pass of the layer.
            # The operations that the layer applies
            # to its inputs are going to be recorded
            # on the GradientTape.
            prediction = model(input, training=False)  # Logits for this minibatch

            # Compute the loss value for this minibatch.
            loss_value = loss_fun(target, prediction)

        # Use the gradient tape to try to retrieve
        # the gradients of the loss with respect to the model inputs.
        grads = tape.gradient(loss_value, model.inputs)
        print(grads)  # output: [None]
        # Run one step of gradient descent by updating
        # the value of the variables to minimize the loss.
        optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
        optimizer.apply_gradients(zip(grads, model.inputs))

        print('Iteration {}'.format(iteration))

However, this does not work: grads = tape.gradient(loss_value, model.inputs) returns [None]. Is this intended behaviour? If so, what is the recommended way to get the gradients with respect to the input?

Upvotes: 1

Views: 4635

Answers (1)

Derk

tape.gradient(loss_value, model.inputs) returns [None] because model.inputs holds the symbolic Keras input tensors, which never take part in the eager forward pass, so the tape has nothing recorded for them. To get it working, two things need to be added:

  1. Convert the input image to a tf.Variable
  2. Call tape.watch so the tape traces operations on the desired tensor

    image = tf.Variable(input)
    # Create the optimizer once, outside the loop, so Adam's moment
    # estimates accumulate across iterations instead of being reset.
    optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

    for iteration in range(400):
        with tf.GradientTape() as tape:
            tape.watch(image)
            # Run the forward pass. The operations applied to the
            # watched image are recorded on the GradientTape.
            prediction = model(image, training=False)

            # Compute the loss value for this minibatch.
            loss_value = loss_fun(target, prediction)

        # Use the gradient tape to retrieve the gradients of the
        # loss with respect to the image.
        grads = tape.gradient(loss_value, image)  # a tensor now, not [None]

        # Run one step of gradient descent by updating the image
        # to minimize the loss.
        optimizer.apply_gradients(zip([grads], [image]))

        print('Iteration {}'.format(iteration))
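
As a side note, a tf.Variable is watched by the tape automatically, so the tape.watch call is what matters when the input stays a plain tensor. A minimal sketch of that variant, reusing the model, loss_fun, and dataset defined in the question and skipping the optimizer step:

    for input, target in dataset:
        with tf.GradientTape() as tape:
            tape.watch(input)  # explicitly watch the non-variable input tensor
            prediction = model(input, training=False)
            loss_value = loss_fun(target, prediction)
        # Gradient of the loss with respect to the input batch itself
        grads = tape.gradient(loss_value, input)
        print(grads.shape)  # (1, 180, 320, 3), same shape as the input batch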
    

Upvotes: 1
