Reputation: 41
In the TensorFlow tutorial for word embeddings one finds:
# Placeholders for inputs
train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
What is the difference between these two placeholders? Aren't they both int32 column vectors of size batch_size?
Thanks.
Upvotes: 2
Views: 488
Reputation: 41
I found the answer with a little debugging.
shape=[batch_size] is a rank-1 tensor (a plain vector): [0, 2, ...]
shape=[batch_size, 1] is a rank-2 tensor (a column of one-element rows): [[0], [2], ...]
Though I still don't know why the second form is used.
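As far as I can tell, the second form exists because the sampled-loss functions the tutorial feeds the labels into (e.g. tf.nn.nce_loss) expect labels with shape [batch_size, num_true], which here is [batch_size, 1]. The rank difference is easy to see with plain NumPy (a minimal sketch; the batch of four word indices is made up for illustration):

```python
import numpy as np

batch_size = 4

# Hypothetical word indices for one batch.
inputs = np.array([0, 2, 5, 7], dtype=np.int32)          # shape [batch_size]
labels = np.array([[0], [2], [5], [7]], dtype=np.int32)  # shape [batch_size, 1]

print(inputs.shape)  # (4,)
print(labels.shape)  # (4, 1)

# The two are interconvertible with a reshape:
labels_from_inputs = inputs.reshape(batch_size, 1)
```

So the two placeholders hold the same numbers; only the rank differs, and the [batch_size, 1] version matches the label shape the loss op wants.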
Upvotes: 2