Reputation: 250
I am trying to load a previously trained TensorFlow model from checkpoint files. These checkpoints contain op variables, so to load the graph I first have to import the graph definition from the **ckpt.meta file:
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
saver = tf.train.import_meta_graph('/data/model_cache/model.ckpt-39.meta')
ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
if ckpt and ckpt.model_checkpoint_path:
    if os.path.isabs(ckpt.model_checkpoint_path):
        saver.restore(sess, ckpt.model_checkpoint_path)
After I have loaded the model, I have a method that uses it for inference to implement deep dream. The problem is that when I call eval() with the default session, I get the error below:
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 555, in eval
  return _eval_using_default_session(self, feed_dict, self.graph, session)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3495, in _eval_using_default_session
  raise ValueError("Cannot use the given session to evaluate tensor: "
ValueError: Cannot use the given session to evaluate tensor: the tensor's graph is different from the session's graph.
I have confirmed that tf.get_default_graph() and sess.graph point to the same memory address. There has to be something very basic I am missing.
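For context, the error above can be reproduced in isolation: a tensor evaluated through eval() must belong to the same graph as the session used to evaluate it. A minimal sketch (using the TF 1.x-style API via tf.compat.v1; the graph and values here are illustrative, not from the question's model):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Build a tensor inside an explicit, separate graph.
graph = tf.Graph()
with graph.as_default():
    t = tf.constant(3.0)

# A session bound to that graph can evaluate the tensor.
sess = tf.Session(graph=graph)
print(t.eval(session=sess))  # 3.0

# A session bound to a *different* graph (here, the default graph)
# raises exactly the ValueError from the question's traceback.
other_sess = tf.Session()
try:
    t.eval(session=other_sess)
except ValueError as e:
    print(e)  # "Cannot use the given session to evaluate tensor: ..."
```

Passing the session explicitly (or using tensors fetched from the session's own graph) avoids the mismatch.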
Upvotes: 6
Views: 8059
Reputation: 4090
I think your problem is that you are confusing the "Python name" and the "TensorFlow name". When you create, for example, W = tf.get_variable("weight", ...), the Python name is W whereas the TensorFlow name is weight.
When loading a model, TensorFlow has no idea about your original Python names, so it will never know what W actually is.
You should first get back the tensors and operations you want to use. You can list them with:
for op in tf.get_default_graph().get_operations():
    print(op.name)
Then use get_operation_by_name(name) and get_tensor_by_name(name) to get them back.
For example, if you want to get back the weight variable from before, you should do:
W = graph.get_tensor_by_name("weight:0")
print(W.eval())
I believe that should work.
Upvotes: 0
Reputation: 1216
It is very likely that the meta graph you're importing, i.e. /data/model_cache/model.ckpt-39.meta, is different from the one that the checkpoint returned by tf.train.get_checkpoint_state(FLAGS.checkpoint_dir) was using.
The usual practice is to call get_checkpoint_state() (or tf.train.latest_checkpoint(FLAGS.checkpoint_dir)), use its output in the import_meta_graph() call, and then restore the variables in the session with the same checkpoint name (and the returned saver). This, of course, only works if a meta graph is saved with each checkpoint.
Upvotes: 0