Hossein

Reputation: 2111

Why use None for the batch dimension in tensorflow?

In the following code, the None is used to declare the size of the placeholders.

x_data = tf.placeholder(tf.int32, [None, max_sequence_length]) 
y_output = tf.placeholder(tf.int32, [None])

As I understand it, this None is used to allow a variable batch dimension. But in most code we already have a variable that holds the batch size, such as:

batch_size = 250

So, is there any reason to use None in such cases instead of simply declaring the placeholders as follows?

x_data = tf.placeholder(tf.int32, [batch_size, max_sequence_length]) 
y_output = tf.placeholder(tf.int32, [batch_size])

Upvotes: 6

Views: 2132

Answers (1)

Imanol Luengo

Reputation: 15889

It is just so that the input of the network doesn't get bound to a fixed batch size, and you can later reuse the trained network to predict on either single instances or arbitrarily large batches (e.g. predict on all your test samples at once).

In other words, it doesn't do much during training, as batches usually have a fixed size during training anyway, but it makes the network much more flexible at test time.
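Here is a minimal sketch of the idea, assuming TensorFlow 1.x (where tf.placeholder exists); the max_sequence_length value and the reduce_sum op are made up just to stand in for a real network:

import numpy as np
import tensorflow as tf

max_sequence_length = 25  # hypothetical value for illustration

# Batch dimension left as None: the placeholder accepts any batch size.
x_data = tf.placeholder(tf.int32, [None, max_sequence_length])
output = tf.reduce_sum(x_data, axis=1)  # stand-in for the real network

with tf.Session() as sess:
    # Train-style run with a batch of 250...
    train_batch = np.zeros((250, max_sequence_length), dtype=np.int32)
    sess.run(output, feed_dict={x_data: train_batch})

    # ...then predict a single instance through the same graph.
    single = np.zeros((1, max_sequence_length), dtype=np.int32)
    sess.run(output, feed_dict={x_data: single})

Had the placeholder been declared with shape [batch_size, max_sequence_length], the single-instance run above would fail with a shape mismatch, and you would need a separate graph (or padding tricks) just to run inference.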

Upvotes: 5
