梁生珺

Reputation: 109

After TensorFlow training, VGG net returns nan

I am training a VGG-19 net to classify CIFAR-10. After just one training step, the net's output becomes nan:

0 [[  4.45161677e+09   2.87961518e+10   4.20765041e+10 ...,          -2.33432433e+10
1.83500431e+10  -1.12923648e+10]
 [  1.18354002e+10   3.38799473e+10   5.86873242e+10 ...,  -4.18343895e+10
2.79392338e+10  -1.61746637e+10]
 [  1.26074880e+09   2.22301839e+10   5.25488333e+10 ...,  -2.92738212e+10
2.51925299e+10  -1.48290714e+10]
 ..., 
 [  1.05694116e+10   2.16351908e+10   5.02961357e+10 ...,  -3.12492278e+10
2.42959094e+10  -1.26112993e+10]
 [  4.72429568e+09   2.75032003e+10   5.14044682e+10 ...,  -3.51395635e+10
2.18048840e+10  -1.46147287e+10]
 [  2.97774285e+09   1.89559747e+10   4.06387917e+10 ...,  -2.35828470e+10
1.96148122e+10  -9.55916698e+09]]
1 [[ nan  nan  nan ...,  nan  nan  nan]
 [ nan  nan  nan ...,  nan  nan  nan]
 [ nan  nan  nan ...,  nan  nan  nan]
 ..., 
 [ nan  nan  nan ...,  nan  nan  nan]
 [ nan  nan  nan ...,  nan  nan  nan]
 [ nan  nan  nan ...,  nan  nan  nan]]

I use tf.train.GradientDescentOptimizer to train the VGG net; the activation function is relu, the weights are initialized with tf.random_normal, and tf.nn.xw_plus_b is used for the fully connected layers. I want to know why the VGG net returns nan after training.
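One likely contributor is the initialization: tf.random_normal's default stddev is 1.0, and with weights drawn from N(0, stddev²), the standard deviation of a layer's pre-activations grows by roughly sqrt(fan_in) · stddev per layer (relu cuts it by about 1/sqrt(2)). A rough, framework-free sketch of that growth model, using hypothetical fan-in numbers rather than the actual VGG-19 shapes:

```python
import math

def growth(fan_ins, stddev_fn):
    """Estimated scale of the activations after a stack of relu layers.

    Models each layer as multiplying the activation std-dev by
    sqrt(fan_in) * stddev, and relu as multiplying it by 1/sqrt(2).
    """
    scale = 1.0
    for n in fan_ins:
        scale *= math.sqrt(n) * stddev_fn(n) / math.sqrt(2)
    return scale

# Hypothetical: sixteen conv layers with 3x3 kernels and 64 input channels.
fan_ins = [64 * 9] * 16

print(growth(fan_ins, lambda n: 1.0))               # stddev=1.0: explodes
print(growth(fan_ins, lambda n: math.sqrt(2 / n)))  # He-style init: stays ~1
```

Under this model, stddev=1.0 inflates the activations by many orders of magnitude before they even reach the loss, which matches the e+09/e+10 values printed above; a fan-in-scaled initialization keeps the scale near 1.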

Upvotes: 0

Views: 361

Answers (1)

J_H

Reputation: 20568

Reducing the learning rate solves this numerical stability problem.
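The mechanism can be illustrated with a toy sketch (not the asker's model): plain gradient descent on f(w) = w², where the update is w -= lr · 2w, so each step multiplies w by (1 − 2·lr). A small lr shrinks w toward the minimum; a large lr makes its magnitude grow geometrically until it overflows to inf, after which inf − inf yields nan, the same huge-values-then-nan pattern shown in the question.

```python
import math

def descend(lr, steps=300, w=1.0):
    """Gradient descent on f(w) = w**2; the gradient is 2*w."""
    for _ in range(steps):
        w -= lr * 2 * w
    return w

print(descend(0.1))   # step factor 0.8: converges toward 0
print(descend(10.0))  # step factor -19: overflows to inf, then becomes nan
```

The same geometry applies per-parameter in a deep net: once the effective step factor exceeds 1 in magnitude, the weights (and hence the logits and loss) blow up within a few iterations, so lowering the learning rate restores stability.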

Upvotes: 1

Related Questions