Part of my model is an inception_v3:
import tensorflow as tf
from tensorflow.contrib.slim.nets import inception  # assuming the slim-bundled inception

logits, end_points = inception.inception_v3(input, num_classes=num_classes, is_training=trainable)
predictions = end_points['Multi_predictions_pretrained_model'] = tf.nn.sigmoid(
    logits, name='Multi_predictions_pretrained_model')
I train it with is_training=True, then I save my model. When I evaluate the model in a new execution, I set is_training=False.
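For context, the two runs look roughly like this (a minimal sketch; the checkpoint path and the Saver calls are just for illustration, not my exact code):
import tensorflow as tf
from tensorflow.contrib.slim.nets import inception

# --- training execution ---
logits, end_points = inception.inception_v3(input, num_classes=num_classes, is_training=True)
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training loop ...
    saver.save(sess, '/tmp/model.ckpt')  # illustrative checkpoint path

# --- evaluation execution (separate process) ---
logits, end_points = inception.inception_v3(input, num_classes=num_classes, is_training=False)
predictions = tf.nn.sigmoid(logits, name='Multi_predictions_pretrained_model')
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, '/tmp/model.ckpt')
    pred_values = sess.run(predictions)  # feeding of `input` omitted for brevity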
The problem is that the prediction output is almost entirely NaN:
There is a nan : True
Number of nan : 5378
Pre-logits: [[[ 1.90298520e+36 0.00000000e+00 7.08422267e+33 ..., 4.63560017e+34
3.25943330e+36 6.92397968e+35]]]
Logits : [ nan nan nan ..., nan nan nan]
Prediction : [ nan nan nan ..., nan nan nan]
If I set is_training=True, the model works well; the prediction contains no NaN:
There is a nan: False
Number of nan: 0
Pre-logits: [[[ 0.05161751 0. 0. ..., 0.10696397 0.09036615 0. ]]]
Logits : [ -9.96004391 -10.36448002 -10.86166286 ..., -13.0117816 -9.29876232 -8.85321808]
Prediction : [ 4.72484280e-05 3.15318794e-05 1.91792424e-05 ..., 2.23384995e-06 9.15290802e-05 1.42900652e-04]
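(For reference, the check above can be done with NumPy on the fetched arrays; 'PreLogits' is the standard inception_v3 end point I assume the pre-logits come from:)
import numpy as np

prelogits, logit_values, pred_values = sess.run(
    [end_points['PreLogits'], logits, predictions])
print('There is a nan :', np.isnan(pred_values).any())
print('Number of nan :', np.isnan(pred_values).sum())
print('Pre-logits:', prelogits)
print('Logits :', logit_values)
print('Prediction :', pred_values)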
What is the difference between False and True? I found that this flag controls dropout and batch_norm.
For dropout:
is_training: A bool `Tensor` indicating whether or not the model
is in training mode. If so, dropout is applied and values scaled.
Otherwise, inputs is returned.
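So with is_training=False dropout is simply a no-op; a tiny sketch with slim.dropout (values made up for illustration):
import tensorflow as tf
slim = tf.contrib.slim

x = tf.ones([1, 4])
train_out = slim.dropout(x, keep_prob=0.5, is_training=True)   # random units zeroed, survivors scaled by 1/0.5
eval_out = slim.dropout(x, keep_prob=0.5, is_training=False)   # returns x unchanged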
For batch_norm:
is_training: Whether or not the layer is in training mode. In training mode
it would accumulate the statistics of the moments into `moving_mean` and
`moving_variance` using an exponential moving average with the given
`decay`. When it is not in training mode then it would use the values of
the `moving_mean` and the `moving_variance`.
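So the flag switches between the batch's own statistics and the stored moving averages; a sketch with tf.contrib.layers.batch_norm (the variable sharing via reuse is my own illustration):
import tensorflow as tf

x = tf.random_normal([32, 10])
# is_training=True: normalise with the batch mean/variance and queue
# update ops (in tf.GraphKeys.UPDATE_OPS) that refresh the moving averages.
bn_train = tf.contrib.layers.batch_norm(x, decay=0.999, is_training=True, scope='bn')
# is_training=False: normalise with the accumulated moving_mean/moving_variance.
bn_eval = tf.contrib.layers.batch_norm(x, decay=0.999, is_training=False, scope='bn', reuse=True)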
How can I resolve this problem?
Thanks.
Upvotes: 1
Views: 647
I found a solution: during training the batch_norm update ops were never run, so the moving_mean and moving_variance used at evaluation time were never updated from their initial values.
I followed this guide for batch normalization in TensorFlow: http://ruishu.io/2016/12/27/batchnorm/
In particular this note:
'''Note: When is_training is True the moving_mean and moving_variance
need to be updated, by default the update_ops are placed in
tf.GraphKeys.UPDATE_OPS so they need to be added as a dependency to
the train_op, example:'''
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    # Ensures that we execute the update_ops before performing the train_step
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
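As far as I know, slim's create_train_op does the same wiring for you by default, so this should be an equivalent fix:
optimizer = tf.train.GradientDescentOptimizer(0.01)
# create_train_op makes the ops in tf.GraphKeys.UPDATE_OPS a dependency
# of the returned train op by default.
train_op = tf.contrib.slim.learning.create_train_op(loss, optimizer)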
Upvotes: 1