anonymous

Reputation: 17

Weight and bias change by very large amounts and eventually reach inf and NaN - TensorFlow

I am a beginner TensorFlow user, and my code produces an error when I feed it large numbers as input. The problem does not occur with smaller inputs. Here is what it prints. The weight should be 300 and the bias 13000. I hard-coded these numbers just to make sure the error was not coming from my file, since I normally take input from a CSV file, and it produces this error in both cases! Thanks, any help would be amazing!

Code

import tensorflow as tf

datapoint_size = 20  # unused in this minimal example
steps = 10000        # unused here; the loop below runs 100000 iterations
#actual_w = 300
#actual_b = 13000
learn_rate = 0.0001

# Linear model: y_pred = w1 * x1 + b
w1 = tf.Variable([1.0], dtype=tf.float32)
b = tf.Variable([1.0], dtype=tf.float32)
x1 = tf.placeholder(tf.float32)
y_ = tf.placeholder(tf.float32)

y_pred = x1 * w1 + b
squared_deltas = tf.square(y_ - y_pred)
cost = tf.reduce_sum(squared_deltas)
train_step = tf.train.GradientDescentOptimizer(learn_rate).minimize(cost)

# Create one session and initialize the variables once.
sess = tf.Session()
sess.run(tf.global_variables_initializer())

for i in range(100000):
    sess.run(train_step, {x1: [1000, 2000, 3000, 4000],
                          y_: [313000, 613000, 913000, 1213000]})
    print("After %d iteration:" % i)
    print("W: %f" % sess.run(w1))
    print("b: %f" % sess.run(b))

Output

After 0 iteration:
W: 1793999.000000
b: 598.999207
After 1 iteration:
W: -10760400896.000000
b: -3586799.250000
After 2 iteration:
W: 64551647182848.000000
b: 21517219840.000000
After 3 iteration:
W: -387245349602852864.000000
b: -129081785974784.000000
After 4 iteration:
W: 2323085181124570251264.000000
b: 774361712747872256.000000
After 5 iteration:
W: -13936188870901762508193792.000000
b: -4645396869013139619840.000000
After 6 iteration:
W: 83603198942318920401772609536.000000
b: 27867733773982968636768256.000000
After 7 iteration:
W: -501535661866597157445008806117376.000000
b: -167178554811854841375365267456.000000
After 8 iteration:
W: inf
b: 1002904201827890788552906230464512.000000
After 9 iteration:
W: nan
b: -inf
After 10 iteration:
W: nan
b: nan
After 11 iteration:
W: nan
b: nan

Upvotes: 0

Views: 114

Answers (1)

Aaron

Reputation: 2364

The problem is just that your learning rate is too big. Because your inputs are in the thousands, the gradient of the squared-error cost scales with the square of the inputs, so even a step size of 0.0001 massively overshoots the minimum, and the overshoot compounds on every iteration until you hit inf and then NaN. Whenever you see your variables diverging like this, a good first thing to try is lowering the learning rate.
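To get a feel for how small the learning rate would have to be with your raw inputs, you can use the standard stability bound for gradient descent on a least-squares cost: the step size must stay below 2 divided by the largest curvature, which here is roughly 2 * sum(x^2). A back-of-the-envelope check (just the stability bound, using your four x values):

import numpy as np

x = np.array([1000.0, 2000.0, 3000.0, 4000.0])
# For a sum-of-squares cost, the largest curvature is roughly 2 * sum(x^2),
# and plain gradient descent is only stable below 2 / curvature.
max_stable_lr = 2.0 / (2.0 * np.sum(x ** 2))
print(max_stable_lr)  # ~3.3e-08, thousands of times smaller than your 0.0001

So with unscaled data you would need a learning rate around 1e-8 just to stop diverging, and at that step size the bias would crawl toward 13000 painfully slowly.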

Actually, in your case, the best thing to do is to normalize the inputs so that they have a smaller range. Then you can use a higher learning rate and it will converge more quickly.
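For example, something along these lines (a minimal sketch using your four training points; the names x_scale and y_scale are just illustrative):

import tensorflow as tf
import numpy as np

# The four training points from the question.
x_data = np.array([1000.0, 2000.0, 3000.0, 4000.0])
y_data = np.array([313000.0, 613000.0, 913000.0, 1213000.0])

# Scale inputs and targets into roughly [0, 1] so the gradients stay small.
x_scale = x_data.max()
y_scale = y_data.max()
x_norm = x_data / x_scale
y_norm = y_data / y_scale

w1 = tf.Variable([1.0], dtype=tf.float32)
b = tf.Variable([1.0], dtype=tf.float32)
x1 = tf.placeholder(tf.float32)
y_ = tf.placeholder(tf.float32)

y_pred = x1 * w1 + b
cost = tf.reduce_sum(tf.square(y_ - y_pred))
# A much larger learning rate is safe now that the data is normalized.
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(2000):
        sess.run(train_step, {x1: x_norm, y_: y_norm})
    w_fit, b_fit = sess.run([w1, b])

# Undo the scaling to recover the parameters in the original units:
# y = (w_fit * y_scale / x_scale) * x + (b_fit * y_scale)
print("W: %f" % (w_fit[0] * y_scale / x_scale))  # should approach 300
print("b: %f" % (b_fit[0] * y_scale))            # should approach 13000

The two prints at the end just substitute x = x_norm * x_scale and y = y_norm * y_scale back into the fitted line to recover the slope and intercept in your original units.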

Upvotes: 2
