Reputation: 215
I have a very simple TensorFlow setup, but one part of it (calculating the accuracy) keeps taking longer and longer to run, and I'm confused about why. I've simplified the code down as much as I can while still reproducing the problem. Here is the code:
import time
import tensorflow as tf
import numpy as np
# dummy data
data = np.zeros((12, 784))
labels = np.zeros((12, 10))
xs = tf.placeholder(tf.float32, [12, 784])
ys = tf.placeholder(tf.float32, [12, 10])
weights = tf.Variable(tf.truncated_normal([784, 10], stddev=0.1))
prediction = tf.nn.softmax(tf.matmul(xs, weights))
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
while True:
    y_pre = sess.run(prediction, feed_dict={xs: data})
    correct_prediction = tf.equal(tf.argmax(y_pre, 1), tf.argmax(labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    start_time = time.time()
    r = sess.run(accuracy, feed_dict={xs: data, ys: labels})
    time_taken = time.time() - start_time
    # why does time_taken keep growing?
    print("time_taken", time_taken)
I suspect it's something I'm doing wrong inside the while True loop. From my experience, time_taken starts off low, around 0.01, but then grows seemingly indefinitely to 0.30 and beyond if you leave it running long enough. Is there some way to keep time_taken constant? Any help would be appreciated, thanks.
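My current guess (unconfirmed) is that building new ops (tf.argmax, tf.equal, tf.reduce_mean) inside the loop keeps adding nodes to the graph. Below is a minimal sketch of the variant I'm considering, where the accuracy op is defined once before the loop and the loop only calls sess.run:

import time
import tensorflow as tf
import numpy as np

# dummy data
data = np.zeros((12, 784))
labels = np.zeros((12, 10))

xs = tf.placeholder(tf.float32, [12, 784])
ys = tf.placeholder(tf.float32, [12, 10])
weights = tf.Variable(tf.truncated_normal([784, 10], stddev=0.1))
prediction = tf.nn.softmax(tf.matmul(xs, weights))

# assumption: build the accuracy ops once here, so no new nodes are
# created in the graph on each iteration of the loop
correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(ys, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

sess = tf.Session()
sess.run(tf.global_variables_initializer())

while True:
    start_time = time.time()
    r = sess.run(accuracy, feed_dict={xs: data, ys: labels})
    time_taken = time.time() - start_time
    print("time_taken", time_taken)

Is this the right way to structure it, or is something else causing the growth?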
Upvotes: 0
Views: 84