Reputation: 6497
I have a function get_image(...)
that performs preprocessing on my input images. I gather all images that belong to the same batch in a list like this:
batch = [get_image(file_path) for file_path in batch_files]
Now I want to convert this list into a single tensor whose first dimension is the batch-size dimension, so that I can feed it to the input placeholder of my network.
_ = self.sess.run([loss],feed_dict={ input_placeholder: batch })
Any idea how I could do that?
batch_concat = tf.placeholder(shape=[None] + self.image_shape, dtype=tf.float32)
for i in xrange(0, self.batch_size):
    if i == 0:
        tmp_batch = tf.expand_dims(batch[i], 0)
        batch_concat = tmp_batch
    else:
        tmp_batch = tf.expand_dims(batch[i], 0)
        batch_concat = tf.concat(0, [batch_concat, tmp_batch])
When I try to concatenate all tensors, I get the following error:
TypeError: The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, or numpy ndarrays.
So maybe it would be enough to convert the tensor back into a numpy array before feeding it to the network?
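A minimal sketch of that idea (the `image_shape`, `batch_files`, and `get_image` below are hypothetical stand-ins, since the real preprocessing isn't shown): if `get_image` returns NumPy arrays, `np.stack` already produces a single ndarray with the batch dimension first, which is an acceptable `feed_dict` value — no TF concat ops needed.

```python
import numpy as np

# hypothetical stand-ins for the real values
image_shape = (64, 64, 3)
batch_files = ["a.png", "b.png", "c.png"]

def get_image(file_path):
    # placeholder preprocessing: just returns a random float image
    return np.random.rand(*image_shape).astype(np.float32)

batch = [get_image(p) for p in batch_files]

# stack along a new leading axis: shape becomes (batch_size,) + image_shape
batch_array = np.stack(batch, axis=0)
print(batch_array.shape)  # (3, 64, 64, 3)

# batch_array is a plain ndarray, so it can be fed directly:
# sess.run([loss], feed_dict={input_placeholder: batch_array})
```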
Upvotes: 2
Views: 8051
Reputation: 480
The issue here is that you are using a tensor as a value in feed_dict. Instead of feeding batch as the value for input_placeholder, why not build the graph on batch directly, assuming batch is your batched tensor?
So, instead of:
input_placeholder = tf.placeholder(tf.int32)
loss = some_function(input_placeholder)
sess.run(loss, feed_dict={input_placeholder: batch})
Do:
loss = some_function(batch)
sess.run(loss)
Upvotes: 0