gilbertoastolfi

Reputation: 47

Keras Custom Layer ValueError: An operation has `None` for gradient

I have created a custom Keras layer. The model compiles fine, but training fails with the following error:

ValueError: An operation has None for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.

Is there any implementation error in my custom layer?

import tensorflow as tf
from keras import backend as K
from keras.layers import Layer

class SpatialLayer(Layer):

    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(SpatialLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        self.bias = None
        self.kernelA = self.add_weight(name='kernelA',
                                       shape=(input_shape[1] - 2, self.output_dim),
                                       initializer='uniform',
                                       trainable=True)
        self.built = True

    def compute_output_shape(self, input_shape):
        return (input_shape[0], input_shape[1] - 2, input_shape[1] - 2, self.output_dim)

    def call(self, inputs):
        x_shape = tf.shape(inputs)
        # coordinates of the 10 largest activations in the flattened map
        top_values, top_indices = tf.nn.top_k(tf.reshape(inputs, (-1,)), 10, sorted=True)
        top_indices = tf.stack((top_indices // x_shape[1], top_indices % x_shape[1]), -1)
        top_indices = tf.cast(top_indices, dtype=tf.float32)
        # pairwise euclidean distances between the 10 coordinates
        t1 = tf.reshape(top_indices, (1, 10, 2))
        t2 = tf.reshape(top_indices, (10, 1, 2))
        result = tf.norm(t1 - t2, ord='euclidean', axis=2)
        # broadcast the distance matrix over the batch via a placeholder of the output shape
        x = tf.placeholder(tf.float32, shape=[None, 10, 10, 1])
        tensor_zeros = tf.zeros_like(x)
        matrix = tensor_zeros + result
        return K.dot(matrix, self.kernelA)


from keras import applications
from keras.layers import Conv2D, Flatten, Dense, Dropout

model = applications.VGG16(weights="imagenet", include_top=False, input_shape=(img_width, img_height, 3))
model.layers.pop()
new_custom_layers = model.layers[-1].output
model.layers[-1].trainable = False

new_custom_layers = Conv2D(filters=1, kernel_size=(3, 3))(new_custom_layers)
new_custom_layers = SpatialLayer(output_dim=1)(new_custom_layers)
new_custom_layers = Flatten()(new_custom_layers)
new_custom_layers = Dense(1024, activation="relu")(new_custom_layers)
new_custom_layers = Dropout(0.5)(new_custom_layers)
new_custom_layers = Dense(1024, activation="relu")(new_custom_layers)
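
For reference, a sketch of how this head might be wired into a trainable Model; the final output layer and the compile settings here are illustrative assumptions, not my exact code:

from keras.models import Model

predictions = Dense(1, activation='sigmoid')(new_custom_layers)  # hypothetical output layer
full_model = Model(inputs=model.input, outputs=predictions)
full_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])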

Any help would be appreciated.

Explanation

The input to my custom Keras layer is a tensor of shape (?, 12, 12, 1) that represents a feature map from a given image. For example:

[[147.00  20.14 ... 0 34.2  0   ]
 [ 12.00  10.14 ... 0 45.2  0   ]
 ...
 [100.00  60.14 ... 0 34.2  99.1]
 [ 90.00  65.14 ... 0 12.2   0.1]]

I want to get the coordinates of the 10 largest values in this tensor, for example: (0,0), (10,0), ..., (10,11), i.e., 10 coordinates.

Finally, I want to calculate the matrix of pairwise euclidean distances between these coordinates. For example:

         coord1  coord2 ... coord9  coord10
coord1     0      12.3       13.1     2.3
coord2    12.3     0          3.2     9.1
  .
  .
  .
coord9    13.1     3.2         0      4.2
coord10    2.3     9.1        4.2      0

This matrix, with shape (?, 10, 10, 1), will be the layer's output.
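
A standalone sketch of the intended computation for a single 12x12 map (written in TF 2.x eager style; the random input and the use of broadcasting instead of tf.reshape are just for illustration):

import tensorflow as tf

feature_map = tf.random.uniform((12, 12))          # stand-in for one 12x12 feature map
flat = tf.reshape(feature_map, (-1,))              # flatten to 144 values
top_values, top_indices = tf.nn.top_k(flat, k=10)  # 10 largest activations
rows = top_indices // 12                           # recover row coordinates
cols = top_indices % 12                            # recover column coordinates
coords = tf.cast(tf.stack([rows, cols], axis=-1), tf.float32)  # shape (10, 2)

# pairwise euclidean distances via broadcasting: (10,1,2) - (1,10,2) -> (10,10)
diff = coords[:, None, :] - coords[None, :, :]
dist_matrix = tf.norm(diff, axis=-1)               # symmetric, zero diagonal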

Upvotes: 1

Views: 187

Answers (1)

Daniel Möller

Reputation: 86600

You cannot backpropagate through operations that are not differentiable, and your function is not differentiable.

You discarded the values (`top_values`) and kept only the integer indices (`top_indices`); integer indices have no gradient with respect to the input values.
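
You can see the missing gradient directly with a minimal check, written here in TF 2.x eager style (the question's code is TF 1.x, so treat this as illustrative):

import tensorflow as tf

x = tf.Variable([3.0, 1.0, 4.0, 1.0, 5.0])

with tf.GradientTape() as tape:
    values, indices = tf.nn.top_k(x, k=2)
    loss_from_values = tf.reduce_sum(values)
print(tape.gradient(loss_from_values, x))   # [0. 0. 1. 0. 1.] -> gradient flows through the values

with tf.GradientTape() as tape:
    values, indices = tf.nn.top_k(x, k=2)
    loss_from_indices = tf.reduce_sum(tf.cast(indices, tf.float32))
print(tape.gradient(loss_from_indices, x))  # None -> the "None for gradient" error in the question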

The only way to use this layer in a model is if everything before it is not trainable. (Or if you find another way of calculating what you want in a differentiable way, that is, with operations that involve the input values.)
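
If you do want trainable coordinates, one possible differentiable substitute (an assumption on my part, not the only option) is a "soft-argmax": softmax-weighted expected coordinates, which do depend on the input values:

import tensorflow as tf

def soft_argmax_2d(feature_map, temperature=10.0):
    """feature_map: (H, W) float tensor -> (2,) expected (row, col) coordinates."""
    h = tf.shape(feature_map)[0]
    w = tf.shape(feature_map)[1]
    flat = tf.reshape(feature_map, (-1,))
    weights = tf.nn.softmax(flat * temperature)  # differentiable "peak picker"
    idx = tf.range(h * w)
    rows = tf.cast(idx // w, tf.float32)
    cols = tf.cast(idx % w, tf.float32)
    row = tf.reduce_sum(weights * rows)          # expected row coordinate
    col = tf.reduce_sum(weights * cols)          # expected column coordinate
    return tf.stack([row, col])

Applying something like this per channel (for instance, with 10 filters in the preceding Conv2D instead of 1; that change is hypothetical) would give 10 differentiable coordinate pairs whose pairwise distances can then be computed as in the question. The temperature is a tunable assumption: higher values push the soft coordinates closer to the hard argmax.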

Upvotes: 2
