Reputation: 67
I know that the usual goal is to minimize the loss function, but what if the loss itself contains a minimum? How can I write such a loss correctly? This may seem a little confusing, so let me give an example.
The loss function takes the minimum over all shifts b of a distance between f1 and the shifted f2, something like loss = min_b d(f1, shift(f2, b)), where f1, f2 are the feature-map outputs of some network and b is a shift distance. Shifting a feature map such as [1, 2, 3, 4, 5] one step to the left gives [2, 3, 4, 5, 1].
The question is how to write this loss function in TensorFlow, given that b is not trainable and the trainable variables are the network weights that produce the feature maps. It seems possible in Torch, since there I could write a for loop over the shifts. How can I achieve this in TensorFlow?
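Roughly, the Torch version I have in mind would be a loop over all shifts, something like this (torch.roll and an L2 distance are just placeholders for whatever the real loss uses):

    import torch

    def min_shift_loss(f1, f2):
        # Sketch: min over circular shifts b of ||f1 - shift(f2, b)||^2
        losses = []
        for b in range(f2.shape[-1]):                      # b is a plain Python int, not a parameter
            shifted = torch.roll(f2, shifts=-b, dims=-1)   # [1, 2, 3, 4, 5] -> [2, 3, 4, 5, 1] for b=1
            losses.append(((f1 - shifted) ** 2).sum())
        return torch.stack(losses).min()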
Upvotes: 0
Views: 738
Reputation: 1213
TensorFlow has tf.minimum(x, y), which returns the element-wise minimum of x and y:
https://www.tensorflow.org/api_docs/python/tf/minimum
You can trust that if there is a TensorFlow operation for it, then the gradient is calculated automatically, and the loss can therefore be optimized.
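For the shift-and-minimum loss in the question, a minimal sketch along those lines might look like the following, assuming an L2 distance and tf.roll for the circular shift (both are assumptions, not something stated in the question). tf.reduce_min is the reduction form of tf.minimum, and its gradient flows through the branch that attains the minimum:

    import tensorflow as tf

    def min_shift_loss(f1, f2):
        """Sketch: min over circular shifts b of ||f1 - shift(f2, b)||^2.
        The L2 distance and shifting along the last axis are assumptions."""
        losses = []
        for b in range(f2.shape[-1]):                    # b is a plain Python int, not trainable
            shifted = tf.roll(f2, shift=-b, axis=-1)     # [1, 2, 3, 4, 5] -> [2, 3, 4, 5, 1] for b=1
            losses.append(tf.reduce_sum(tf.square(f1 - shifted)))
        # The gradient of reduce_min flows only through the shift that attains the minimum,
        # so the network weights that produce f1 and f2 are still optimized.
        return tf.reduce_min(tf.stack(losses))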
Upvotes: 1