Reputation: 4926
I have the inception_resnet_v2_2016_08_30.ckpt file, which is a pre-trained Inception-ResNet-v2 model. I want to restore this model using
saver.restore(sess, ckpt_filename)
But for that, I need to define the set of variables that were used while training this model. Where can I find those (a script, or a detailed description)?
Upvotes: 2
Views: 1287
Reputation: 827
First of all, you have to get the network architecture in memory. You can get the network architecture from here (the inception_resnet_v2.py script from the TF-slim models repository).
Once you have this script, use the following approach to load the model:
import tensorflow as tf
import tensorflow.contrib.slim as slim
from inception_resnet_v2 import inception_resnet_v2, inception_resnet_v2_arg_scope

height = 299
width = 299
channels = 3

# Build the graph for inference
X = tf.placeholder(tf.float32, shape=[None, height, width, channels])
with slim.arg_scope(inception_resnet_v2_arg_scope()):
    logits, end_points = inception_resnet_v2(X, num_classes=1001, is_training=False)
With this you have the whole network in memory. Now you can initialize the network from the checkpoint file (.ckpt) using tf.train.Saver:
saver = tf.train.Saver()
sess = tf.Session()
saver.restore(sess, "/home/pramod/Downloads/inception_resnet_v2_2016_08_30.ckpt")
If you want to do bottleneck extraction, it is simple: say you want to get the features from the last layer, then you just declare predictions = end_points["Logits"].
If you want the output of another intermediate layer, you can look up its name in the above script inception_resnet_v2.py.
After that you can call output = sess.run(predictions, feed_dict={X: batch_images}).
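Putting those pieces together, here is a minimal sketch of extracting features for a batch of images, assuming the graph and saver from the snippets above have already been built. The batch_images array is made-up placeholder data, and end-point names other than "Logits" should be checked against end_points.keys() in your copy of inception_resnet_v2.py:
import numpy as np

# Hypothetical input batch; replace with your own preprocessed images
batch_images = np.random.rand(8, height, width, channels).astype(np.float32)

predictions = end_points["Logits"]  # last-layer features, as described above

with tf.Session() as sess:
    saver.restore(sess, "/home/pramod/Downloads/inception_resnet_v2_2016_08_30.ckpt")
    output = sess.run(predictions, feed_dict={X: batch_images})
    print(output.shape)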
Upvotes: 2
Reputation: 1361
I believe the MetaGraph mechanism is what you need.
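As a rough illustration of the MetaGraph route (the paths below are hypothetical, and this assumes a matching .meta file was exported alongside the checkpoint, which the downloaded .ckpt alone does not include):
import tensorflow as tf

with tf.Session() as sess:
    # import_meta_graph rebuilds the graph and returns a Saver for it
    saver = tf.train.import_meta_graph("/path/to/model.ckpt.meta")
    saver.restore(sess, "/path/to/model.ckpt")
    graph = tf.get_default_graph()
    print([op.name for op in graph.get_operations()][:10])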
EDIT: additionally, take a look at tf.train.NewCheckpointReader -- it has a get_variable_to_shape_map() method. See the unit test.
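For example, a short sketch that lists the variable names and shapes stored in the checkpoint from the question:
import tensorflow as tf

reader = tf.train.NewCheckpointReader("inception_resnet_v2_2016_08_30.ckpt")
var_to_shape_map = reader.get_variable_to_shape_map()
for name, shape in sorted(var_to_shape_map.items()):
    print(name, shape)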
Upvotes: 0