Reputation: 759
I am attempting to replicate a deep neural network from a research paper. The architecture can be found here:
I have completed designing the model, and now I am attempting to prepare training data. I have been using the TensorFlow tutorial found here as a guide: https://www.tensorflow.org/get_started/mnist/pros
In the case of the MNIST data, each 28x28 image is flattened into a 1-D vector of 784 values for x. The labels y_ have shape [None, 10] because each image can be labeled one of 10 ways (digits 0-9):
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
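As I understand the tutorial, the feed data for those placeholders looks roughly like this (a minimal sketch with random stand-in arrays instead of the real MNIST loader):
import numpy as np

# Random stand-in batch: 50 images of 28x28 pixels, one digit label each
images = np.random.rand(50, 28, 28).astype(np.float32)
labels = np.random.randint(0, 10, size=50)

batch_x = images.reshape(-1, 28 * 28)              # shape (50, 784), fed to x
batch_y = np.eye(10, dtype=np.float32)[labels]     # one-hot, shape (50, 10), fed to y_

# sess.run(train_step, feed_dict={x: batch_x, y_: batch_y})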
My data is a 32x32x7 3-D image, so the size of x is easy to calculate: 32 * 32 * 7 = 7168.
x = tf.placeholder(tf.float32, shape=[None, 7168])
Although my image is 32x32x7, each pixel has a density and a label associated with it. I believe the density values should be loaded into x and the labels into y_. Is this a correct assumption, or should I be loading my data in a different way?
y_ = tf.placeholder(tf.float32, shape=[None, 7168])
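Here is a minimal sketch of how I would build one feed batch under that assumption, using random stand-in arrays and assuming a single scalar (e.g. binary) label per voxel so that y_ can keep the [None, 7168] shape:
import numpy as np

# Random stand-in for one 32x32x7 volume: a density and a label per voxel
densities = np.random.rand(32, 32, 7).astype(np.float32)
voxel_labels = np.random.randint(0, 2, size=(32, 32, 7))   # binary label assumed

batch_x = densities.reshape(1, 32 * 32 * 7)                        # shape (1, 7168), fed to x
batch_y = voxel_labels.reshape(1, 32 * 32 * 7).astype(np.float32)  # shape (1, 7168), fed to y_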
Upvotes: 1
Views: 525
Reputation: 34288
my image is 32x32x7, each pixel has a density and label associated with it
If so, then the output of the network, and the target y_, would be of shape:
[
None, # Batch size
32 * 32 * 7, # Vector size
N # N target labels (one hot encoded)
]
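A minimal sketch of what that could look like, assuming a hypothetical N = 4 classes and integer per-voxel labels; the network logits and loss are only indicated in comments:
import numpy as np
import tensorflow as tf

N = 4  # hypothetical number of per-voxel classes

# Target placeholder with the shape described above: one one-hot vector per voxel
y_ = tf.placeholder(tf.float32, shape=[None, 32 * 32 * 7, N])

# Building a feed value from integer per-voxel labels for a single volume
voxel_labels = np.random.randint(0, N, size=(1, 32 * 32 * 7))
batch_y = np.eye(N, dtype=np.float32)[voxel_labels]   # one-hot, shape (1, 7168, N)

# With network logits of shape [None, 7168, N], a per-voxel loss could be:
# loss = tf.reduce_mean(
#     tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))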
Upvotes: 1