Reputation: 678
I have a graph in TensorFlow which I've trained with a batch size of 32 observations over hundreds of epochs. I now want to predict some new data with the trained graph, so I've saved it and reloaded it, but I'm forced to always pass in the same number of observations as my batch size, because I declared a placeholder in my graph whose shape depends on the batch size. How can I make my graph accept any number of observations?
How should I configure this so that I can train on any number of observations and then run a different number later?
Below is an excerpt of the important parts of the code. Building the graph:
graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, shape=[batch_size, self.image_height, self.image_width, 1], name="data")
    y_ = tf.placeholder(tf.float32, shape=[batch_size, num_labels], name="labels")

    # Layer 1
    W_conv1 = weight_variable([patch_size, patch_size, 1, depth], name="weight_1")
    b_conv1 = bias_variable([depth], name="bias_1")
    h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1, name="conv_1") + b_conv1, name="relu_1")
    h_pool1 = max_pool_2x2(h_conv1, name="pool_1")

    # Layer 2
    #W_conv2 = weight_variable([patch_size, patch_size, depth, depth*2])
    #b_conv2 = bias_variable([depth*2])
    #h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
    #h_pool2 = max_pool_2x2(h_conv2)

    # Densely Connected Layer
    W_fc1 = weight_variable([self.image_height/4 * self.image_width/2 * depth*2, depth], name="weight_2")
    b_fc1 = bias_variable([depth], name="bias_2")
    h_pool2_flat = tf.reshape(h_pool1, [-1, self.image_height/2 * self.image_width/2 * depth], name="reshape_1")
    h_fc1 = tf.nn.relu(tf.nn.xw_plus_b(h_pool2_flat, W_fc1, b_fc1), name="relu_2")

    keep_prob = tf.placeholder(tf.float32, name="keep_prob")
    h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob, name="drop_1")

    W_fc2 = weight_variable([depth, num_labels], name="dense_weight")
    b_fc2 = bias_variable([num_labels], name="dense_bias")

    logits = tf.nn.xw_plus_b(h_fc1_drop, W_fc2, b_fc2)
    tf.add_to_collection("logits", logits)
    y_conv = tf.nn.softmax(logits, name="softmax_1")
    tf.add_to_collection("y_conv", y_conv)

    with tf.name_scope("cross-entropy") as scope:
        cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y_conv, y_, name="cross_entropy_1"))
        ce_summ = tf.scalar_summary("cross entropy", cross_entropy, name="cross_entropy")

    optimizer = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy, name="min_adam_1")

    with tf.name_scope("prediction") as scope:
        correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        accuracy_summary = tf.scalar_summary("accuracy", accuracy, name="accuracy_summary")

    merged = tf.merge_all_summaries()
Loading and running new data:
with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph('./simple_model/one-layer-50.meta')
    new_saver.restore(sess, './simple_model/one-layer-50')
    logger.info("Model restored")

    image, _ = tf_nn.reformat(images, None, 3)
    x_image = tf.placeholder(tf.float32, shape=[image.shape[0], 28, 28, 1], name="data")
    keep_prob = tf.placeholder(tf.float32, name="keep_prob")
    feed_dict = {x_image: image, keep_prob: .01}
    y_ = tf.get_collection("y_")
    prediction = sess.run(y_, feed_dict=feed_dict)
Upvotes: 4
Views: 4852
Reputation: 2663
You can define your placeholders to have a flexible size along one of the dimensions by using None instead of a specific number, like this:
x = tf.placeholder(tf.float32, shape=[None, self.image_height, self.image_width, 1], name="data")
y_ = tf.placeholder(tf.float32, shape=[None, num_labels], name="labels")
Edit: There's a section about this in the TensorFlow FAQ.
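To illustrate the idea, here is a minimal, self-contained sketch (made-up shapes and names, not your graph) showing that a placeholder declared with a None batch dimension accepts batches of different sizes at run time:
import numpy as np
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # Batch dimension left as None, so any number of rows can be fed.
    x = tf.placeholder(tf.float32, shape=[None, 4], name="data")
    w = tf.Variable(tf.zeros([4, 2]), name="weights")
    y = tf.matmul(x, w)

with tf.Session(graph=graph) as sess:
    sess.run(tf.initialize_all_variables())
    # A training-sized batch of 32 rows...
    print(sess.run(y, feed_dict={x: np.zeros((32, 4))}).shape)  # (32, 2)
    # ...and a prediction batch of 5 rows through the same graph.
    print(sess.run(y, feed_dict={x: np.zeros((5, 4))}).shape)   # (5, 2)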
Upvotes: 7
Reputation: 164
My approach would have been to define batch_size as a tf.Variable and then feed the value of the batch size you want to use when running the session. This has worked fine for me in the past, but I guess the solution by Stryke would be more elegant.
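A rough sketch of that idea (here using a scalar placeholder for the batch size rather than a tf.Variable, with hypothetical names); the point is that the batch size is fed at run time instead of being baked into the placeholder shapes:
import numpy as np
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    batch_size = tf.placeholder(tf.int32, name="batch_size")           # fed per session.run
    x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1], name="data")
    # Any shape that depends on the batch size is built from the fed tensor.
    flat = tf.reshape(x, tf.pack([batch_size, 28 * 28]))

with tf.Session(graph=graph) as sess:
    data = np.zeros((7, 28, 28, 1), dtype=np.float32)
    print(sess.run(flat, feed_dict={x: data, batch_size: 7}).shape)     # (7, 784)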
Upvotes: 2