Reputation: 826
I have a network whose output layer has shape [3, 13000, 3, 1] (B, H, W, C), and I transformed it with tf.reduce_mean to obtain an output of shape [3, 13000, 1]. Is this correct?
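For reference, this is roughly what I'm doing (the tensor contents here are placeholders, only the shapes matter):

```python
import tensorflow as tf

# network output of shape [B, H, W, C] = [3, 13000, 3, 1]
logits = tf.zeros([3, 13000, 3, 1])

# averaging over the W axis (axis=2) gives shape [3, 13000, 1]
predict = tf.reduce_mean(logits, axis=2)
print(predict.shape)  # (3, 13000, 1)
```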
My labels have shape [3, 13000, 1], matching my new output, and all their values are 0 or 1.
Now I have to compute the loss against the labels. To do that I use tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=predict, labels=y)), but first I have to transform all the values in the output into 0 or 1. I'm using the tf.nn.softmax function, but I get all 1s.
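Concretely, this is my current attempt, assuming predict and y both have shape [3, 13000, 1]:

```python
# every entry of probs comes out as 1.0
probs = tf.nn.softmax(predict)

loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(logits=predict, labels=y))
```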
How can I implement a function that maps all values below a threshold to 0 and all values above it to 1? The threshold should be, for example, (max value - min value) / 2, and the function should still let gradients flow in the backprop step.
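Something like this sketch, except that tf.cast on a boolean comparison has no gradient, which is exactly my problem:

```python
# hard threshold at (max - min) / 2, as described above
threshold = (tf.reduce_max(predict) - tf.reduce_min(predict)) / 2

# produces 0.0/1.0 values, but the gradient is zero everywhere
binary = tf.cast(predict > threshold, tf.float32)
```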
Upvotes: 0
Views: 1646
Reputation: 17191
Since your prediction is a single class value, applying softmax to it always yields 1, whatever the value: exp(predict) / sum(exp(predict)) = exp(predict) / exp(predict) = 1, because the sum runs over just that one element. Either convert the labels to one-hot and make the model predict two classes, [0, 1], or use sigmoid cross entropy instead.
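A minimal sketch of the sigmoid option, assuming predict holds your raw logits of shape [3, 13000, 1] and y holds the 0/1 labels as floats:

```python
import tensorflow as tf

# sigmoid cross entropy treats each element as an independent
# binary classification, so no softmax normalization is involved
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=predict))

# at inference time, threshold the sigmoid probability at 0.5;
# the loss above is computed on the raw logits, so this hard
# threshold never needs to be differentiated
preds = tf.cast(tf.nn.sigmoid(predict) > 0.5, tf.float32)
```

If you prefer the two-class softmax route instead, the last layer would need two output channels and the labels would need to be converted with tf.one_hot.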
Upvotes: 1