Kong

Reputation: 2422

TensorFlow: compute image gradient loss

I am trying to optimize my network with a loss on the image gradients of the reconstructed image and the ground truth, but I am receiving this error:

InvalidArgumentError: Input is not invertible.

I think it is because TensorFlow needs to invert the transform in order to backpropagate through the image transformation. How do I fix this?

def image_gradient_loss(y_prediction, y):
    # Intended: compare one-pixel-shift image gradients of the prediction
    # and the ground truth, then average the absolute difference.
    gradient_loss = (
        tf.abs(tf.abs(y_prediction - tf.contrib.image.transform(y_prediction, [1, 0, 1, 0, 0, 0, 0, 0]))
               - tf.abs(y - tf.contrib.image.transform(y, [1, 0, 1, 0, 0, 0, 0, 0])))
        + tf.abs(tf.abs(y_prediction - tf.contrib.image.transform(y_prediction, [0, 0, 0, 0, 1, 1, 0, 0]))
                 - tf.abs(y - tf.contrib.image.transform(y, [0, 0, 0, 0, 1, 1, 0, 0])))
    )
    return tf.reduce_mean(gradient_loss)



loss = image_gradient_loss(y_pred, y)
optimizer = tf.train.RMSPropOptimizer(learning_rate=0.001).minimize(loss)
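
For reference, tf.contrib.image.transform takes eight parameters [a0, a1, a2, b0, b1, b2, c0, c1] encoding the projective matrix [[a0, a1, a2], [b0, b1, b2], [c0, c1, 1]], and computing its gradient requires inverting that matrix. Both vectors I pass above leave zeros on the diagonal, so the matrices are singular. Here is a sketch of one-pixel shifts that would keep the matrix invertible (my guess, untested):

# Guess: keep the 1s on the diagonal so the implied 3x3 matrix stays invertible
shift_x = [1, 0, 1, 0, 1, 0, 0, 0]  # maps output (x, y) to input (x + 1, y)
shift_y = [1, 0, 0, 0, 1, 1, 0, 0]  # maps output (x, y) to input (x, y + 1)
grad_x = y_prediction - tf.contrib.image.transform(y_prediction, shift_x)
grad_y = y_prediction - tf.contrib.image.transform(y_prediction, shift_y)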

Upvotes: 2

Views: 2396

Answers (1)

TSimron

Reputation: 73

I did these steps and it worked for me:

import tensorflow as tf
from tensorflow.keras import backend as K

dy_true, dx_true = tf.image.image_gradients(y_true)  # (dy, dx) per pixel
dy_pred, dx_pred = tf.image.image_gradients(y_pred)
term3 = K.mean(K.abs(dy_pred - dy_true) + K.abs(dx_pred - dx_true), axis=-1)
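
To wire this into the training setup from the question, a minimal continuation of the snippet above (assuming y and y_pred are 4-D [batch, height, width, channels] tensors, which tf.image.image_gradients requires; the extra tf.reduce_mean is my addition to get a scalar loss):

loss = tf.reduce_mean(term3)  # term3 is per-pixel, so reduce it to a scalar
optimizer = tf.train.RMSPropOptimizer(learning_rate=0.001).minimize(loss)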

Upvotes: 2
