K-Dawg

Reputation: 3299

Keras output shape has an extra dimension

I have a neural network that takes in a 500px by 500px RGB colour image and outputs another image of the same dimensions.

Here's the structure of my network:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, PReLU

Generative_Model = Sequential([
    Conv2D(32, (6, 6), padding="same", name="generative",
           input_shape=(500, 500, 3), data_format="channels_last"),
    PReLU(alpha_initializer='zeros'),
    Conv2D(3, (3, 3), padding="same"),
    PReLU(alpha_initializer='zeros', name="outp1"),
])

The problem I'm having is that the output shape is (None, 500, 500, 3), though I was expecting (500, 500, 3). I'm not sure where the extra dimension is coming from.

It's important that the extra dimension is removed before leaving the network, as its output feeds into a second, adversarial network.
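For reference, inspecting the shape programmatically shows the same thing (a quick sketch against the model defined above):

# Keras reports shapes with a leading batch axis, shown as None.
print(Generative_Model.output_shape)  # (None, 500, 500, 3)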

Here's what I get when I print model.summary():

[screenshot: output from model.summary()]

I've tried adding a Reshape at the end to force the network to drop the extra dimension, but it doesn't appear to work: the output shape remains the same.

Upvotes: 1

Views: 964

Answers (1)

K-Dawg

Reputation: 3299

While I was speaking to @Dodge in chat, he pointed me to the following docs:

https://www.tensorflow.org/api_docs/python/tf/keras/layers/Reshape

which state that the additional None comes from the batch size. I needed to feed the output of the first network into the input of a second network that didn't expect a batch dimension, so I removed it using a reshape outside of the first network, like so:

# Adversarial network comprised of a generator network and a discriminator network.
self.model = Sequential([
    Gen_Input,                                            # generator network
    Reshape((500, 500, 3), input_shape=(500, 500, 3)),
    discriminative_model.Input                            # discriminator network
])

This allowed me to reshape the output from inside the graph.
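As a standalone illustration of what the linked docs describe (a minimal sketch with a made-up tensor, separate from the networks above): the target_shape given to Reshape never includes the batch axis, and Keras carries that dimension through on its own.

import tensorflow as tf
from tensorflow.keras.layers import Reshape

x = tf.zeros((4, 500, 500, 3))   # hypothetical batch of 4 images
y = Reshape((500, 500, 3))(x)    # target_shape excludes the batch axis
print(y.shape)                   # (4, 500, 500, 3)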

Upvotes: 1
