ScientiaEtVeritas

Reputation: 5278

Feed Iterator to Tensorflow Graph

I have a tf.data.Iterator created with make_one_shot_iterator() and want to use it to train my (existing) model.

Currently my training looks like this:

input_node = tf.placeholder(tf.float32, shape=(None, height, width, channels))

net = models.ResNet50UpProj({'data': input_node}, batch_size, keep_prob=True, is_training=True)

labels = tf.placeholder(tf.float32, shape=(None, width, height, 1))
huberloss = tf.losses.huber_loss(predictions=net.get_output(), labels=labels)

And then calling

sess.run(train_op, feed_dict={labels:output_img, input_node:input_img})

After training I can get a prediction like this:

pred = sess.run(net.get_output(), feed_dict={input_node: img})

Now, with an iterator, I tried something like this:

next_element = iterator.get_next()

Passing the input data like this:

net = models.ResNet50UpProj({'data': next_element[0]}, batch_size, keep_prob=True, is_training=True)

Defining the loss function like this:

huberloss = tf.losses.huber_loss(predictions=net.get_output(), labels=next_element[1])

The training step is then as simple as repeatedly calling the following, which advances the iterator automatically on each call:

sess.run(train_op)
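Running the training this way makes one pass over the data; once the iterator is exhausted, `sess.run` raises `tf.errors.OutOfRangeError`, which can be caught to end the epoch. A minimal, self-contained sketch (a toy dataset and a stand-in `train_op` replace the real ones; written against the `tf.compat.v1` API so it also runs under TF 2.x):

```python
# Sketch: driving a one-shot iterator for a full epoch.
# The dataset and train_op below are toy stand-ins for the real ones.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

dataset = tf.data.Dataset.from_tensor_slices([1.0, 2.0, 3.0])
next_element = dataset.make_one_shot_iterator().get_next()
train_op = next_element * 2.0  # stand-in for the real training op

results = []
with tf.Session() as sess:
    while True:
        try:
            results.append(sess.run(train_op))
        except tf.errors.OutOfRangeError:  # iterator exhausted: epoch done
            break
```

Here `results` ends up as `[2.0, 4.0, 6.0]`, one value per element of the toy dataset.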

My problem is: after training I can't make any predictions. Or rather, I don't know the proper way to use the iterator in my case.

Upvotes: 3

Views: 1206

Answers (1)

Siyuan Ren

Reputation: 7844

Solution 1: create a separate sub-graph just for inference. This matters especially when you have layers such as batch normalization or dropout that behave differently at inference time (is_training=False).

# The following code assumes that you create variables with `tf.get_variable`. 
# If you create variables manually, you have to reuse them manually.
with tf.variable_scope('somename'):
    net = models.ResNet50UpProj({'data': next_element[0]}, batch_size, keep_prob=True, is_training=True)
with tf.variable_scope('somename', reuse=True):
    net_for_eval = models.ResNet50UpProj({'data': some_placeholder_or_inference_data_iterator}, batch_size, keep_prob=True, is_training=False)
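The reuse works because `tf.get_variable` returns the already-created variable when `reuse=True`, so both sub-graphs share one set of weights. A self-contained toy version of the same pattern (a single scalar weight stands in for ResNet50UpProj; `tf.compat.v1` API):

```python
# Sketch: two sub-graphs (train and eval) sharing the same variable.
# A single scalar weight stands in for the full model.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

def toy_net(x):
    # tf.get_variable is what makes reuse possible
    w = tf.get_variable('w', shape=[], initializer=tf.constant_initializer(3.0))
    return x * w

eval_input = tf.placeholder(tf.float32, shape=[])

with tf.variable_scope('model'):
    train_out = toy_net(tf.constant(2.0))
with tf.variable_scope('model', reuse=True):
    eval_out = toy_net(eval_input)  # reuses 'model/w', creates no new variable

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    train_val = sess.run(train_out)                             # 2.0 * 3.0
    eval_val = sess.run(eval_out, feed_dict={eval_input: 4.0})  # 4.0 * 3.0
```

Only one variable (`model/w`) exists in the graph; after training, you run the eval sub-graph (here `eval_out`, in your case `net_for_eval.get_output()`) with its own input instead of the training graph.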

Solution 2: use feed_dict. You can replace almost any tf.Tensor, not just tf.placeholder, with a feed dict.

sess.run(huber_loss, {next_element[0]: inference_image, next_element[1]: inference_labels})
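When you feed a tensor this way, the fed value replaces what the graph would have computed, so the iterator's get_next op is pruned and the iterator is not advanced. A self-contained sketch of the mechanism (toy data; `tf.compat.v1` API):

```python
# Sketch: feeding a non-placeholder tensor overrides its computed value.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

dataset = tf.data.Dataset.from_tensor_slices([10.0])
next_element = dataset.make_one_shot_iterator().get_next()
loss = next_element * 3.0

with tf.Session() as sess:
    fed = sess.run(loss, feed_dict={next_element: 2.0})  # iterator bypassed
    unfed = sess.run(loss)                               # iterator consumed
```

`fed` is 6.0 (computed from the fed value), while `unfed` is 30.0 (computed from the iterator's first element, which is only consumed on the second run).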

Upvotes: 4
