AkiRoss

Reputation: 12282

Working without TensorFlow placeholder

Until recently, all my code in TF used tf.placeholder to represent the input. It is very convenient because it lets me feed a batch of arbitrary length to my network, so the same code works in different places (training, testing, prediction...).

After finding out that feed_dict is so slow, I wanted to switch to pipelines, which basically use tf.Variable instead of placeholders: each variable is a fixed-length tensor representing a batch that is used as input to the network.

My problem is that, while placeholders were "untied" and you had to feed data to them, pipelines are bound to their input data. So, for example, once I set up my pipeline to use the training data in batches of size 10, I cannot use data from the test set in batches of, say, 12 examples.

Or can I?

What is the proper way of working without placeholders?
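For reference, the flexibility being traded away can be sketched with the TF1-era graph API (accessible as `tf.compat.v1` in current releases; the shapes here are made up for illustration). A placeholder whose leading dimension is `None` accepts batches of any size from the same graph:

```python
import numpy as np
import tensorflow.compat.v1 as tf  # TF1-style graph API in a TF2 install

tf.disable_eager_execution()

# None as the leading dimension means "any batch size".
x = tf.placeholder(tf.float32, shape=[None, 4])
y = tf.reduce_sum(x, axis=1)  # stand-in for a network

with tf.Session() as sess:
    # The same graph accepts a batch of 10 and a batch of 12.
    out_train = sess.run(y, feed_dict={x: np.ones((10, 4), np.float32)})
    out_test = sess.run(y, feed_dict={x: np.ones((12, 4), np.float32)})
```

A pipeline built on fixed-shape variables gives this up: the batch size is baked into the graph.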

Upvotes: 2

Views: 1354

Answers (2)

Jongju Shin

Reputation: 21

The cifar10 example doesn't use placeholders. It uses tf.FixedLengthRecordReader and tf.train.shuffle_batch; the generated batch of input images is passed directly to the CNN without a placeholder.

Please refer to TensorFlow's official tutorial and its Python code:

https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10_train.py
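A toy version of such a queue-based pipeline can be sketched as follows (TF1-era API via `tf.compat.v1`; the shapes and queue capacities are made up, and a random tensor stands in for the image that the real code decodes with tf.FixedLengthRecordReader):

```python
import tensorflow.compat.v1 as tf  # TF1-style graph API in a TF2 install

tf.disable_eager_execution()

# Stand-in for one decoded example; the real CIFAR-10 code produces
# this from tf.FixedLengthRecordReader over the binary data files.
image = tf.random_uniform([8, 8, 3])
label = tf.constant(1, dtype=tf.int32)

# shuffle_batch assembles fixed-size batches behind a shuffling queue.
image_batch, label_batch = tf.train.shuffle_batch(
    [image, label],
    batch_size=10,
    capacity=100,
    min_after_dequeue=20,
    num_threads=2)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    imgs, labels = sess.run([image_batch, label_batch])  # one batch of 10
    coord.request_stop()
    coord.join(threads)
```

The batch tensors can then be passed straight into the model, with no feed_dict involved.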

Also, at test time, you can build the pipeline for the test data with a different batch size from the training data.

Please refer to the evaluation code: https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10_eval.py
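The key point is that batch size is a property of the input pipeline, not of the model weights, so the evaluation graph can be built with its own batch size (as cifar10_eval.py does, restoring weights from a checkpoint). Stripped of the TensorFlow machinery, the fixed-size batching a pipeline performs can be sketched in plain Python (the helper name is made up):

```python
def fixed_batches(examples, batch_size):
    """Yield fixed-size batches, dropping any partial batch at the end,
    as a fixed-shape input tensor would require."""
    for i in range(0, len(examples) - batch_size + 1, batch_size):
        yield examples[i:i + batch_size]

train_set = list(range(100))
test_set = list(range(36))

# Training pipeline uses batches of 10; evaluation uses batches of 12.
train_batches = list(fixed_batches(train_set, 10))
test_batches = list(fixed_batches(test_set, 12))
```

Each pipeline feeds the same model; only the batch dimension differs between the two graphs.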

Upvotes: 1

AkiRoss

Reputation: 12282

I could not find any method for working without placeholders other than using fixed-size variables.

Apparently, there is no recommended way to use pipelines other than the one suggested by the official documentation on reading data.

Upvotes: 0
