Ghost here

Reputation: 35

Where do these NaN values come from?

EDIT:
I had some NaN in my data, but the answer is correct: you have to initialize your weights with some noise! Thanks!

I'm writing my first script with TensorFlow. I had some trouble printing values, but I've got that working now. I wanted to start with a simple logistic regression, and I'm working on the Kaggle Titanic dataset.

My problem is that I don't know why, but I get some NaN in my weights and bias, and therefore in my y (prediction) vector too...

EDIT: My weights were initialized to 0, so I guess I had a null gradient. Following the answer provided, I added

W = tf.truncated_normal([5, 1], stddev=0.1)

instead of

 W = tf.Variable(tf.zeros([5, 1])) #weight for softmax

But I still have some issues. My b and y variables are still NaN, and when I tried the same thing for b I got the following error: ValueError: No variables to optimize
I tried several ways to assign my bias as a [1, 1] tensor, but it looks like I'm missing something.
It looks like y is NaN because the cross entropy is NaN because b is NaN... :( END EDIT
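For reference, what seems to be missing is the tf.Variable wrapper: tf.truncated_normal only returns a tensor of random values, and the optimizer raises ValueError: No variables to optimize when there is no tf.Variable in the graph. A minimal sketch of noisy initialization for both parameters, assuming the same shapes as above:

W = tf.Variable(tf.truncated_normal([5, 1], stddev=0.1))  # random initial weights wrapped in a trainable variable
b = tf.Variable(tf.truncated_normal([1], stddev=0.1))     # bias also as a trainable variable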

I read this post (Why does TensorFlow return [[nan nan]] instead of probabilities from a CSV file?), which gave me a hint: during the cross-entropy computation, 0*log(0) returns NaN, so I applied the solution given there, which is to add 1e-50, like this:

cross_entropy = -tf.reduce_sum(y_*tf.log(y + 1e-50))

Unfortunately that wasn't the problem, I guess; I still get NaN everywhere :(
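Another common way to guard the log against exact zeros, instead of adding a tiny constant, is to clip the predicted probabilities before taking the log; this is just a sketch of that alternative, not necessarily the root cause here:

cross_entropy = -tf.reduce_sum(y_ * tf.log(tf.clip_by_value(y, 1e-10, 1.0)))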

This is the interesting (I guess) part of my very simple model:

x = tf.placeholder(tf.float32, [None, 5]) #placeholder for input data

W = tf.truncated_normal([5, 1], stddev=0.1)

b = tf.Variable(tf.zeros([1])) # no error but nan
#b = tf.truncated_normal([1, 1], stddev=0.1) # throws the error described above
#b = [0.1] no error but nan

y = tf.nn.softmax(tf.matmul(x, W) + b) #our model -> pred from model

y_ = tf.placeholder(tf.float32, [None, 1]) # placeholder for input labels

cross_entropy = -tf.reduce_sum(y_*tf.log(y)) # cross-entropy cost function

train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
init = tf.initialize_all_variables() # op that initializes all variables

sess = tf.InteractiveSession()
sess.run(init)

testacc = []
trainacc = []
for i in range(15):
    batch_xs = train_input[i*50:(i + 1) * 50]
    batch_ys = train_label[i*50:(i + 1) * 50]

    result = sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

correct_prediction = tf.equal(y,y_)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run([accuracy, W, y] , feed_dict={x: test_input, y_: test_label}))

It returns a 0.0 accuracy, of course, followed by two arrays of NaN. I tried to print values everywhere, but it's NaN everywhere :'(
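As a side note on the accuracy measure: tf.equal(y, y_) compares float probabilities to the 0/1 labels exactly, so it will rarely match even when the predictions are reasonable. A sketch of a thresholded comparison, assuming the labels are encoded as 0.0/1.0:

predictions = tf.cast(tf.greater(y, 0.5), tf.float32)  # threshold probabilities at 0.5
correct_prediction = tf.equal(predictions, y_)          # compare against the 0/1 labels
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))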

Does somebody have an idea? I may have forgotten something or be doing it wrong.

The thing is that I tried a similar script with MNIST (the Google tutorial) using the included data, and it works (no NaN). I get my data with pandas, reading the CSV file.
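Since the final edit points to NaN in the data itself, it can help to check for and fill missing values in the pandas DataFrame before feeding it to TensorFlow. A rough sketch; the file path and the "Age" column are only illustrative assumptions about the Titanic CSV:

import pandas as pd

df = pd.read_csv("train.csv")                       # assumed path to the Kaggle training file
print(df.isnull().sum())                            # count missing values per column
df["Age"] = df["Age"].fillna(df["Age"].median())    # e.g. fill missing ages with the median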

Thanks for reading!

Upvotes: 1

Views: 835

Answers (1)

Yaroslav Bulatov

Reputation: 57893

You are getting a division by zero in tf.nn.softmax since your weight matrices are zero. Use a different initialization method, like truncated_normal from the MNIST example.

Upvotes: 3
