Reputation: 759
I have built a deep CNN based on a research paper, and now I am attempting to train it. After all of the convolutions and deconvolutions, I have a result tensor called final.
final = tf.add(add1,add2)
print(final)
Tensor("Add_35:0", shape=(1, 32, 32, 7, 1), dtype=float32)
In my model, the input is a 32x32x7 image where each pixel has a corresponding density, and the output is a label for each pixel. I have therefore declared two placeholders, where "x" represents the input and "y_" represents the output.
x = tf.placeholder(tf.float32, shape=[None, 7168])
y_ = tf.placeholder(tf.float32, shape=[None, 7168])
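As a quick sanity check (my own arithmetic, not part of the question), the placeholder width matches the flattened volume size:

```python
# Each example is a 32 x 32 x 7 volume with one value per voxel,
# so the flattened input/label vector has 32 * 32 * 7 elements.
dim1, dim2, dim3 = 32, 32, 7
print(dim1 * dim2 * dim3)  # 7168, matching shape=[None, 7168]
```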
Now that I am attempting to train the model, I have this line
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=final))
When the model is being trained, I receive this error:
logits and labels must be same size: logits_size=[7168,1] labels_size=[1,7168]
It makes sense that labels would be this size, since that is how I declared it. However, I do not understand why logits has size [7168,1] when printing "final" shows the shape (1, 32, 32, 7, 1).
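For what it's worth, the sizes in the error are consistent with the loss op flattening its inputs to 2-D and treating the last dimension as the class axis. A minimal NumPy sketch of that reading (the reshape here is my illustration, not TensorFlow's actual internals):

```python
import numpy as np

# Stand-in for the model output `final`, shape (1, 32, 32, 7, 1).
logits = np.zeros((1, 32, 32, 7, 1), dtype=np.float32)

# Collapsing all leading dimensions and keeping the last as "classes"
# leaves 1 * 32 * 32 * 7 = 7168 rows of 1 class each:
collapsed = logits.reshape(-1, logits.shape[-1])
print(collapsed.shape)  # (7168, 1) -- the logits_size in the error
```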
Upvotes: 1
Views: 1436
Reputation: 12908
Just tf.reshape your final:
final = tf.reshape(final, [-1, 7168])
(Note: tf.reshape does not accept None in the shape argument; -1 lets TensorFlow infer that dimension.)
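A minimal NumPy sketch of the flatten-to-[batch, 7168] reshape (NumPy standing in for TensorFlow here; note that tf.reshape needs -1, not None, for an inferred dimension):

```python
import numpy as np

# Stand-in for `final`, shape (1, 32, 32, 7, 1).
final = np.zeros((1, 32, 32, 7, 1), dtype=np.float32)

# -1 tells reshape to infer that dimension: 32*32*7 values per example.
reshaped = final.reshape(-1, 7168)
print(reshaped.shape)  # (1, 7168) -- now matching labels_size
```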
Although I am not sure why it is automatically flattened when you call softmax_cross_entropy_with_logits.
...
Upvotes: 1