morepenguins

Reputation: 1287

Dimensions Don't Match for Decoder in Tensorflow Tutorial

I am following the Convolutional Autoencoder tutorial for TensorFlow (TensorFlow 2.0 with Keras), found here.

I am using the provided code for building a CNN, but adding one more convolutional layer to both the encoder and the decoder causes the code to break:

import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import Model

class Denoise(Model):
  def __init__(self):
    super(Denoise, self).__init__()
    self.encoder = tf.keras.Sequential([
      layers.Input(shape=(28, 28, 1)), 
      layers.Conv2D(16, (3,3), activation='relu', padding='same', strides=2),
      layers.Conv2D(8, (3,3), activation='relu', padding='same', strides=2),
      ## New Layer ##
      layers.Conv2D(4, (3,3), activation='relu', padding='same', strides=2)
      ## --------- ##
      ])

    self.decoder = tf.keras.Sequential([
      ## New Layer ##
      layers.Conv2DTranspose(4, kernel_size=3, strides=2, activation='relu', padding='same'),
      ## --------- ##
      layers.Conv2DTranspose(8, kernel_size=3, strides=2, activation='relu', padding='same'),
      layers.Conv2DTranspose(16, kernel_size=3, strides=2, activation='relu', padding='same'),
      layers.Conv2D(1, kernel_size=(3,3), activation='sigmoid', padding='same')
      ])

  def call(self, x):
    encoded = self.encoder(x)
    decoded = self.decoder(encoded)
    return decoded

autoencoder = Denoise()

Running autoencoder.encoder.summary() and autoencoder.decoder.summary(), I can see this is a shape issue:

Encoder:
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_124 (Conv2D)          (None, 14, 14, 16)        160       
_________________________________________________________________
conv2d_125 (Conv2D)          (None, 7, 7, 8)           1160      
_________________________________________________________________
conv2d_126 (Conv2D)          (None, 4, 4, 4)           292       
=================================================================
Total params: 1,612
Trainable params: 1,612
Non-trainable params: 0
_________________________________________________________________

Decoder:
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_transpose_77 (Conv2DT (32, 8, 8, 4)             148       
_________________________________________________________________
conv2d_transpose_78 (Conv2DT (32, 16, 16, 8)           296       
_________________________________________________________________
conv2d_transpose_79 (Conv2DT (32, 32, 32, 16)          1168      
_________________________________________________________________
conv2d_127 (Conv2D)          (32, 32, 32, 1)           145       
=================================================================
Total params: 1,757
Trainable params: 1,757
Non-trainable params: 0
_________________________________________________________________

Why is the leading dimension on the decoding side 32? Why wouldn't the decoder's input dimensions be (None, 4, 4, 4) if the inputs are passed from the encoder? How do I fix this?

Thank you in advance for your help with this!

Upvotes: 1

Views: 551

Answers (2)

Nicolas Gervais

Reputation: 36714

Remove strides=2 from your last encoder layer, and add strides=2 to your last decoder layer:

import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import Model

class Denoise(Model):
  def __init__(self):
    super(Denoise, self).__init__()
    self.encoder = tf.keras.Sequential([
      layers.Input(shape=(28, 28, 1)), 
      layers.Conv2D(16, (3,3), activation='relu', padding='same', strides=2),
      layers.Conv2D(8, (3,3), activation='relu', padding='same', strides=2),
      ## New Layer ##
      layers.Conv2D(4, (3,3), activation='relu', padding='same')
      ## --------- ##
      ])

    self.decoder = tf.keras.Sequential([
      ## New Layer ##
      layers.Conv2DTranspose(4, kernel_size=3, strides=2, activation='relu', padding='same'),
      ## --------- ##
      layers.Conv2DTranspose(8, kernel_size=3, strides=2, activation='relu', padding='same'),
      layers.Conv2DTranspose(16, kernel_size=3, strides=2, activation='relu', padding='same'),
      layers.Conv2D(1, kernel_size=(3,3), activation='sigmoid', padding='same', strides=2)
      ])

  def call(self, x):
    encoded = self.encoder(x)
    decoded = self.decoder(encoded)
    return decoded

autoencoder = Denoise()
autoencoder.build(input_shape=(1, 28, 28, 1))
autoencoder.summary()
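To see why this works, here is a minimal sketch of the shape arithmetic. With 'same' padding, a stride-s Conv2D outputs ceil(input/s) per spatial dimension, while a stride-s Conv2DTranspose outputs input*s:

import math

size = 28
# Encoder strides are now 2, 2, 1 (the new layer no longer downsamples)
for s in (2, 2, 1):
    size = math.ceil(size / s)       # 'same' padding: ceil(input / stride)
print('encoder output:', size)       # 7

# Decoder: three stride-2 transposed convs double the size each time
for s in (2, 2, 2):
    size *= s                        # 'same' padding: input * stride
print('before final conv:', size)    # 56

# The final stride-2 Conv2D halves it back down
size = math.ceil(size / 2)
print('decoder output:', size)       # 28 -- matches the 28x28 input

Compare this with your original code: the encoder produced 4x4 feature maps, so the decoder's three doublings could only reach 32, never 28.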

Upvotes: 1

thushv89

Reputation: 11343

Keras uses 32 as the default batch_size, which is probably where that leading dimension of 32 comes from. To fix this problem, you can include the input_shape argument in the first layer of your decoder:

self.decoder = tf.keras.models.Sequential([
      ## New Layer ##
      layers.Conv2DTranspose(4, kernel_size=3, strides=2, activation='relu', padding='same', input_shape=(4,4,4)),
      ## --------- ##
      layers.Conv2DTranspose(8, kernel_size=3, strides=2, activation='relu', padding='same'),
      layers.Conv2DTranspose(16, kernel_size=3, strides=2, activation='relu', padding='same'),
      layers.Conv2D(1, kernel_size=(3,3), activation='sigmoid', padding='same')
      ])
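As a quick sanity check (a minimal sketch; the layers mirror the snippet above), building the decoder standalone and printing its summary should now report None as the batch dimension:

import tensorflow as tf
from tensorflow.keras import layers

decoder = tf.keras.models.Sequential([
    layers.Conv2DTranspose(4, kernel_size=3, strides=2, activation='relu',
                           padding='same', input_shape=(4, 4, 4)),
    layers.Conv2DTranspose(8, kernel_size=3, strides=2, activation='relu', padding='same'),
    layers.Conv2DTranspose(16, kernel_size=3, strides=2, activation='relu', padding='same'),
    layers.Conv2D(1, kernel_size=(3,3), activation='sigmoid', padding='same')
])
decoder.summary()   # first layer output: (None, 8, 8, 4) -- not (32, 8, 8, 4)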

Also, for consistency and to avoid potential issues, I'd use the tf.keras.models module for models (e.g. tf.keras.models.Sequential rather than tf.keras.Sequential) and tf.keras.layers for layers. It might not be a problem here, but mixing them could cause issues.

Upvotes: 1
