Reputation: 5027
In the following code, I am writing a for loop to print the Fibonacci sequence, to practice using TensorFlow. However, after a few iterations it makes the numbers negative and then just returns zeros. Why? How can I fix this? It works fine if I use floats, by the way. Also, why is this so slow compared to a straightforward algorithm?
import tensorflow as tf
var1 = tf.Variable(1, tf.int8)
var2 = tf.Variable(1, tf.int8)
temp = tf.Variable(0, tf.int8)
var12 = tf.add(var1, var2)
task1 = tf.assign(var1, var2)
task2 = tf.assign(var2, var12)
task3 = tf.assign(var2, tf.add(var1, temp))
init_op = tf.initialize_all_variables()
with tf.Session() as sess:
    sess.run(init_op)
    for _ in range(50):
        sess.run(var12)
        sess.run(task1)
        sess.run(task2)
        print(var12.eval(), end=',')
Output: 3,6,12,24,48,96,192,384,768,1536,3072,6144,12288,24576,49152,98304,196608,393216,786432,1572864,3145728,6291456,12582912,25165824,50331648,100663296,201326592,402653184,805306368,1610612736,-1073741824,-2147483648,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
Upvotes: 0
Views: 321
Reputation: 6367
For your main question: it's an issue with how computers store numbers. What you're hitting here is called an integer overflow.
The numbers no longer fit in the variables, and TensorFlow doesn't try to fix that because it would slow things down (maybe you're used to Python handling this for you: 999**999
ftw).
Your variables are of type int32, because that's the default.
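You can see the wrap-around without TensorFlow at all; here is a minimal sketch with NumPy, whose int32 uses the same fixed-width machine integers (the starting value is just the last positive number in your output):
import numpy as np

a = np.array([1610612736], dtype=np.int32)  # last positive value printed above
print(a + a)   # [-1073741824]: 3221225472 doesn't fit in int32, so it wraps around
print(a * 8)   # [0]: 1610612736 is 3 * 2**29, so times 8 it's a multiple of 2**32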
You're passing tf.int8 to tf.Variable, but it lands in the second positional parameter, trainable, not in dtype, so the dtype is inferred from the initial value 1 as int32. (An int8 would have rolled over even sooner, at 128.) Switch it to:
tf.Variable(1, dtype=tf.int64)
int64 is much larger.
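To spell that out, here's a small sketch (TF 1.x; the second positional argument of tf.Variable is trainable, not dtype):
import tensorflow as tf

bad = tf.Variable(1, tf.int8)           # tf.int8 lands in trainable; dtype is inferred from 1 as int32
good = tf.Variable(1, dtype=tf.int64)   # dtype passed by keyword, as intended
print(bad.dtype, good.dtype)            # the first is int32, the second int64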
Also, that's not the correct sequence; it's just doubling the previous element. The sess.run(var12) doesn't do anything (and var is a misleading name for it, since it's not a variable): var12 is automatically recomputed when you run task2. But by that point task1 has already overwritten var1 with var2, so task2 assigns var2 = var2 + var2.
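Here's a rough sketch of one way to get the actual sequence, still with TF 1.x variables and assigns. It puts the temp variable you already created to use, saving the old var2 before it gets overwritten; the op names (save_old, advance, shift) are just mine, and running each assign in its own sess.run keeps the ordering unambiguous:
import tensorflow as tf

var1 = tf.Variable(1, dtype=tf.int64)   # F(n-1)
var2 = tf.Variable(1, dtype=tf.int64)   # F(n)
temp = tf.Variable(0, dtype=tf.int64)   # scratch space for the old F(n)

save_old = tf.assign(temp, var2)                 # temp <- F(n)
advance  = tf.assign(var2, tf.add(var1, var2))   # var2 <- F(n-1) + F(n) = F(n+1)
shift    = tf.assign(var1, temp)                 # var1 <- F(n)

init_op = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init_op)
    for _ in range(50):
        sess.run(save_old)
        sess.run(advance)
        sess.run(shift)
        print(sess.run(var2), end=',')
This prints 2,3,5,8,13,... and with int64 it stays well below the overflow point for 50 iterations.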
Why is it slow?
> python fib.py
It's spending almost all of that time importing TensorFlow; it's a big library.
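One way to check that for yourself (a rough sketch, just wall-clock timing around the import and around the loop):
import time

start = time.time()
import tensorflow as tf          # this is the expensive part
print('import took %.1fs' % (time.time() - start))

start = time.time()
# ... build the graph and run the Fibonacci loop here ...
print('compute took %.3fs' % (time.time() - start))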
Good luck, I hope that helps.
Upvotes: 1