beginner

Reputation: 468

TensorFlow placeholder dimension - what's the difference?

New to TensorFlow but am stuck with this placeholder declaration question. What exactly is the difference between defining a placeholder x as:

x = tf.placeholder(tf.float32, [None, seq_size])

as opposed to this?

x = tf.placeholder(tf.float32, [None, seq_size, 1])

I'm thinking in terms of matrices. Say variable x is fed 10 values and seq_size is 3: the first gives a 10x3 tensor and the second gives a 10x3x1 tensor. Why would TensorFlow treat them differently?

Upvotes: 1

Views: 739

Answers (2)

Naveen Honest Raj K

Reputation: 362

x = tf.placeholder(tf.float32, [None, seq_size, 1])

Here 'x' is a placeholder that holds a tensor of shape [anything, seq_size, 1]. This works well with matrix operations, where some multi-dimensional computations are easier to carry out after promoting the data to a higher-dimensional tensor.

P.S.: Arrays of shape [None, seq_size] and [None, seq_size, 1] can contain the same number of elements, and one can easily be reshaped into the other.
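To see this concretely, here is a minimal sketch using NumPy (the shape arithmetic is the same as for TensorFlow tensors); the sizes 10 and 3 are just example values:

```python
import numpy as np

# 10 samples, seq_size = 3
a = np.arange(30, dtype=np.float32).reshape(10, 3)  # shape (10, 3)
b = a.reshape(10, 3, 1)                             # shape (10, 3, 1)

# Both arrays hold the same 30 elements in the same order,
# so reshaping between the two shapes is lossless.
print(a.size == b.size)          # True
print(b.reshape(10, 3).shape)    # (10, 3)
```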

Upvotes: 0

Thomas Moreau

Reputation: 4467

TensorFlow treats them differently for shape-validation purposes. For instance, matrix multiplication with a matrix of size 3x4 is not possible with the second version, because the dimensions 1 and 3 do not match. TensorFlow can detect this at graph-construction time.
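A small sketch of the shape mismatch, using NumPy for brevity (NumPy raises the error at call time, whereas TensorFlow 1.x would raise it when the graph is built); the batch size 10 is an example value:

```python
import numpy as np

w = np.ones((3, 4), dtype=np.float32)        # a 3x4 weight matrix

x2d = np.ones((10, 3), dtype=np.float32)     # shape (10, 3)
print((x2d @ w).shape)                       # (10, 4): inner dims 3 and 3 match

x3d = np.ones((10, 3, 1), dtype=np.float32)  # shape (10, 3, 1)
try:
    x3d @ w                                  # inner dims 1 and 3 do not match
except ValueError as e:
    print("shape error:", e)
```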

Also, on the code-readability side, it can help general understanding to keep an extra dimension of size 1 if that dimension might change in the future. For instance, if your data points are univariate time series, using

x = tf.placeholder(tf.float32, [None, seq_size, 1])

will make it easier to extend your code to multivariate time series of dimension d > 1 with

x = tf.placeholder(tf.float32, [None, seq_size, d])

since all of your code already accounts for this extra dimension.
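As an illustration of why the trailing dimension pays off, here is a hypothetical helper (sketched with NumPy; the function name and sizes are made up for this example) that averages over the time axis. Written against shape [batch, seq_size, d], it works unchanged for both the univariate and the multivariate case:

```python
import numpy as np

def mean_over_time(x):
    # x has shape [batch, seq_size, d]; averaging over the time axis
    # works the same whether d == 1 (univariate) or d > 1 (multivariate).
    return x.mean(axis=1)  # shape [batch, d]

uni = np.ones((10, 3, 1), dtype=np.float32)    # univariate series, d = 1
multi = np.ones((10, 3, 5), dtype=np.float32)  # multivariate series, d = 5

print(mean_over_time(uni).shape)    # (10, 1)
print(mean_over_time(multi).shape)  # (10, 5)
```

Had the univariate code been written against shape [batch, seq_size] instead, every such helper would need to be revisited when moving to d > 1.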

Upvotes: 2

Related Questions