Reputation: 2070
I'm currently saving and restoring neural network models using TensorFlow's Saver class, as shown below:
saver.save(sess, checkpoint_prefix, global_step=step)
saver.restore(sess, checkpoint_file)
This saves .ckpt files of the model to a specified path. Because I am running multiple experiments, I have limited space to store these models.
I would like to know if there is a way to save these models without saving content in specified directories.
For example, could I just pass some object from the last checkpoint to an evaluate() function and restore the model from that object?
As far as I can see, the save_path parameter in tf.train.Saver.restore() is not optional.
Any insight would be much appreciated.
Thanks
Upvotes: 7
Views: 150
Reputation: 1856
You can use the graph and weights already loaded in your session to evaluate in the same way that you train; you just need to change the input so it comes from your evaluation set. Here is some pseudocode for a training loop with an evaluation pass every 1000 iterations (it assumes you have created a tf.Session called sess):
x = tf.placeholder(...)
loss, train_step = model(x)

for i in range(num_step):
    # Train on the next batch.
    input_x = get_train_data(i)
    sess.run(train_step, feed_dict={x: input_x})

    # Every 1000 steps, evaluate using the weights still in memory.
    if i % 1000 == 0 and i != 0:
        eval_loss = 0
        for j in range(num_eval):
            input_x = get_eval_data(j)
            eval_loss += sess.run(loss, feed_dict={x: input_x})
        print(eval_loss / num_eval)
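To make this concrete, here is a minimal self-contained sketch (the toy linear model below is invented purely for illustration): because the trained variables live inside the tf.Session, the very same sess can run evaluation ops immediately after training, with no saver.save/saver.restore round trip.

import tensorflow as tf

# Toy linear model fitting y = 2x, just to illustrate the point.
x = tf.placeholder(tf.float32, shape=[None])
y = tf.placeholder(tf.float32, shape=[None])
w = tf.Variable(0.0)
loss = tf.reduce_mean(tf.square(w * x - y))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_step, feed_dict={x: [1.0, 2.0], y: [2.0, 4.0]})
    # Evaluate with the weights still in memory -- nothing written to disk.
    print(sess.run(loss, feed_dict={x: [3.0], y: [6.0]}))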
If you're using tf.data for your input, then you can create a tf.cond to select which input to use:
is_training = tf.placeholder(tf.bool)
next_element = tf.cond(is_training,
                       lambda: get_next_train(),
                       lambda: get_next_eval())
get_next_train and get_next_eval would have to create all of the ops used for reading the dataset inside their own bodies; otherwise running the code above will have side effects, because ops created outside the tf.cond branch functions execute regardless of which branch is selected.
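Here is a sketch of what that could look like; the two tf.data.Dataset objects and the random arrays are hypothetical stand-ins for your real input pipelines:

import numpy as np
import tensorflow as tf

# Hypothetical in-memory data, stand-ins for real input pipelines.
train_data = np.random.rand(1000, 4).astype(np.float32)
eval_data = np.random.rand(100, 4).astype(np.float32)

def get_next_train():
    # Every dataset-reading op is created inside this function, so it
    # only runs when the tf.cond takes the training branch.
    dataset = tf.data.Dataset.from_tensor_slices(train_data).repeat().batch(32)
    return dataset.make_one_shot_iterator().get_next()

def get_next_eval():
    dataset = tf.data.Dataset.from_tensor_slices(eval_data).repeat().batch(32)
    return dataset.make_one_shot_iterator().get_next()

is_training = tf.placeholder(tf.bool)
next_element = tf.cond(is_training,
                       lambda: get_next_train(),
                       lambda: get_next_eval())

with tf.Session() as sess:
    train_batch = sess.run(next_element, feed_dict={is_training: True})
    eval_batch = sess.run(next_element, feed_dict={is_training: False})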
This way you don't have to save anything to disk if you don't want to.
Upvotes: 1