Reputation: 2075
I'm trying to restore my Tensorflow model -- it's a linear regression network. I'm sure I am doing something wrong because my predictions are not good. When I train, I have a test set. My test set predictions look great, but then when I try to restore the same model, predictions look poor.
Here is how I save the model:
with tf.Session() as sess:
    saver = tf.train.Saver()
    init = tf.global_variables_initializer()
    sess.run(init)
    training_data, ground_truth = d.get_training_data()
    testing_data, testing_ground_truth = d.get_testing_data()
    for iteration in range(config["training_iterations"]):
        start_pos = np.random.randint(len(training_data) - config["batch_size"])
        batch_x = training_data[start_pos:start_pos+config["batch_size"],:,:]
        batch_y = ground_truth[start_pos:start_pos+config["batch_size"]]
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        train_acc, train_loss = sess.run([accuracy, cost], feed_dict={x: batch_x, y: batch_y})
        sess.run(optimizer, feed_dict={x: testing_data, y: testing_ground_truth})
        test_acc, test_loss = sess.run([accuracy, cost], feed_dict={x: testing_data, y: testing_ground_truth})
        samples = sess.run(pred, feed_dict={x: testing_data})
        # print samples
        data.compute_acc(samples, testing_ground_truth)
        print("Training\tAcc: {}\tLoss: {}".format(train_acc, train_loss))
        print("Testing\t\tAcc: {}\tLoss: {}".format(test_acc, test_loss))
        print("Iteration: {}".format(iteration))
        if iteration % config["save_step"] == 0:
            saver.save(sess, config["save_model_path"]+str(iteration)+".ckpt")
Here are some examples from my test set. You'll notice `My prediction` is relatively close to `Actual`:
My prediction: -12.705 Actual : -10.0
My prediction: 0.000 Actual : 8.0
My prediction: -14.313 Actual : -23.0
My prediction: 17.879 Actual : 13.0
My prediction: 17.452 Actual : 24.0
My prediction: 22.886 Actual : 29.0
Custom accuracy: 5.0159861487
Training Acc: 5.63836860657 Loss: 25.6545143127
Testing Acc: 4.238052845 Loss: 22.2736053467
Iteration: 6297
Then here's how I restore the model:
with tf.Session() as sess:
    saver = tf.train.Saver()
    saver.restore(sess, config["retore_model_path"]+"3000.ckpt")
    init = tf.global_variables_initializer()
    sess.run(init)
    pred = sess.run(pred, feed_dict={x: predict_data})[0]
    print("Prediction: {:.3f}\tGround truth: {:.3f}".format(pred, ground_truth))
But here's what the predictions look like. You'll notice that `Prediction` is always right around 0:
Prediction: 0.355 Ground truth: -22.000
Prediction: -0.035 Ground truth: 3.000
Prediction: -1.005 Ground truth: -3.000
Prediction: -0.184 Ground truth: 1.000
Prediction: 1.300 Ground truth: 5.000
Prediction: 0.133 Ground truth: -5.000
Here is my TensorFlow version (yes, I know I need to update):
Python 2.7.6 (default, Oct 26 2016, 20:30:19)
[GCC 4.8.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> print(tf.__version__)
0.12.0-rc1
Not sure if this helps, but I tried placing the `saver.restore()` call after the `sess.run(init)` call and I get predictions that are all the same. I think this is because `sess.run(init)` initializes the variables. I changed the ordering like this:

sess.run(init)
saver.restore(sess, config["retore_model_path"]+"6000.ckpt")
But then predictions look like this:
Prediction: -15.840 Ground truth: 2.000
Prediction: -15.840 Ground truth: -7.000
Prediction: -0.000 Ground truth: 12.000
Prediction: -15.840 Ground truth: -9.000
Prediction: -15.175 Ground truth: -27.000
Upvotes: 1
Views: 352
Reputation: 32111
When you restore from a checkpoint, you don't initialize your variables, as you noted at the end of your question:
init = tf.global_variables_initializer()
sess.run(init)
That overwrites the variables you just restored. Oops! :)
Comment those two lines out and I suspect you'll be good to go.
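To make the ordering issue concrete without needing TensorFlow installed, here's a toy sketch in plain NumPy. The `session` dict, `restore`, and `init_variables` names are made up for illustration; they mimic what `saver.restore()` and `sess.run(tf.global_variables_initializer())` do to the variable values. Running the initializer after the restore wipes out the trained weights, which is exactly why the predictions collapse toward 0:

```python
import numpy as np

# Hypothetical "trained checkpoint": weights a linear model learned.
trained_checkpoint = {"w": np.array([2.5]), "b": np.array([-1.0])}

def restore(session, checkpoint):
    # Like saver.restore(): copy the saved values into the variables.
    for name, value in checkpoint.items():
        session[name] = value.copy()

def init_variables(session, rng):
    # Like sess.run(init): overwrite every variable with fresh
    # small random values, regardless of what was there before.
    for name in session:
        session[name] = rng.normal(scale=0.01, size=session[name].shape)

rng = np.random.default_rng(0)
session = {"w": np.zeros(1), "b": np.zeros(1)}

# Wrong order: restore, THEN init. The init clobbers the restored
# weights, so the model predicts values near zero for any input.
restore(session, trained_checkpoint)
init_variables(session, rng)
wrong = session["w"][0] * 10 + session["b"][0]   # near 0

# Right order: restore only, no init. The trained weights survive.
restore(session, trained_checkpoint)
right = session["w"][0] * 10 + session["b"][0]   # 2.5 * 10 - 1.0 = 24.0
```

The same logic applies in your real script: build the graph, create the `Saver`, call `saver.restore(sess, ...)`, and skip `sess.run(init)` entirely for the restored session.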
Upvotes: 2