shamalaia

Reputation: 2347

TensorFlow 2: from Sequential to the Functional API

I have the following autoencoder model defined using the Sequential API:

    import tensorflow as tf
    from tensorflow import keras
    from tensorflow.keras.layers import Conv2D, Input, Dense, Lambda, Reshape, Activation, Dropout

    def test_seq(X_train, epochs):
        model = keras.Sequential()

        # Encoder
        model.add(Conv2D(64, kernel_size=(6, 2),
                         activation='relu',
                         padding='same',
                         dilation_rate=1,
                         strides=1,
                         input_shape=(X_train.shape[1], X_train.shape[2], 1)))
        model.add(Dropout(0.1))

        # Latent space
        model.add(Dense(2))

        # Decoder
        model.add(Conv2D(64, kernel_size=(6, 2),
                         activation='relu',
                         padding='same',
                         dilation_rate=1,
                         strides=1))
        model.add(Dropout(0.1))

        model.add(Conv2D(1, kernel_size=(6, 2),
                         activation='relu',
                         padding='same'))

        model.compile(loss='mse', optimizer='adam', metrics=['mae', 'mape'])
        model.summary()  # summary() prints directly; wrapping it in print() adds a stray "None"
        print()

        print('TRAINING...')
        history = model.fit(X_train, X_train, epochs=epochs, verbose=0)
        print('DONE !!')

        return model, history

I want to rewrite it using the Functional API so that I can return the encoder and decoder separately. My attempt is:

    def test_fun(X_train, epochs, latent_dim):
        import tensorflow as tf
        from tensorflow.keras.layers import Conv2D, Input, Dense, Lambda, Reshape
        from tensorflow.keras.layers import Dropout, BatchNormalization
        from tensorflow.keras.models import Model
        from tensorflow.keras.losses import mean_squared_error
        from tensorflow.keras import backend as K  # tf.keras backend, not the standalone keras package

        ### Encoder
        e_i = Input(shape=(X_train.shape[1], X_train.shape[2], 1), name='enc_input')
        cx  = Conv2D(64, kernel_size=(6, 2),
                     activation='relu',
                     padding='same',
                     dilation_rate=1,
                     strides=1, name='enc_c1')(e_i)  # no input_shape needed: the Input layer fixes it
        cx  = Dropout(0.1)(cx)

        ### Latent space
        x = Dense(latent_dim, name='latent_space')(cx)

        ### Instantiate encoder
        encoder = Model(e_i, x, name='encoder')
        encoder.summary()

        ### Decoder
        d_i = Input(shape=(latent_dim, 1), name='decoder_input')
        cx  = Conv2D(64, kernel_size=(6, 2),
                     activation='relu',
                     padding='same',
                     dilation_rate=1,
                     strides=1, name='dec_c1')(d_i)
        cx  = Dropout(0.1)(cx)

        o = Conv2D(1, kernel_size=(6, 2),
                   activation='relu',
                   padding='same',
                   name='decoder_output')(cx)

        ### Instantiate decoder
        decoder = Model(d_i, o, name='decoder')
        decoder.summary()

        ### Instantiate autoencoder
        ae_outputs = decoder(encoder(e_i))
        ae         = Model(e_i, ae_outputs, name='ae')

        ### Compile autoencoder
        ae.compile(optimizer='adam', loss='mse', metrics=['mae', 'mape'],
                   experimental_run_tf_function=False)

        ### Train autoencoder
        history = ae.fit(X_train, X_train, epochs=epochs, verbose=0)

        return encoder, decoder, ae, history

However, this fails with an input shape error when building the decoder: the first Conv2D layer ('dec_c1') complains that it expected ndim=4 but found ndim=3.

Why does it expect ndim=4?

On second thought, how can the Sequential model even work without padding='valid' in the first layer of the decoder? In any case, I get the same error even with padding='valid'.

Upvotes: 0

Views: 71

Answers (1)

sai nikhit

Reputation: 24

I think the problem is not with the padding. According to the Keras documentation:

A Conv2D layer takes a 4-dimensional input, i.e. (batch_size, rows, cols, channels).
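
For instance, a minimal sketch of that check (the exact error wording varies across TF/Keras versions):

    from tensorflow.keras.layers import Input, Conv2D

    # shape excludes the batch axis, so this tensor is (None, 2, 1): ndim=3
    bad = Input(shape=(2, 1))
    Conv2D(64, kernel_size=(6, 2), padding='same')(bad)
    # ValueError: ... expected ndim=4, found ndim=3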

But the input you have given to the layer "dec_c1" has shape (latent_dim, 1), which is only 3-dimensional once the batch axis is added, so I would suggest changing the shape of the Input layer "decoder_input" to a 3-dimensional format, something like this:

    d_i = Input(shape=(latent_dim, 1, 1), name='decoder_input')
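
With that change the batched tensor is 4-dimensional and the layer builds. A quick check, assuming latent_dim = 2 purely for illustration:

    from tensorflow.keras.layers import Input, Conv2D

    latent_dim = 2  # assumed value, just for this check

    # 3-D shape -> 4-D tensor once the batch axis is added
    d_i = Input(shape=(latent_dim, 1, 1), name='decoder_input')
    cx  = Conv2D(64, kernel_size=(6, 2), activation='relu',
                 padding='same', name='dec_c1')(d_i)
    print(cx.shape)  # (None, 2, 1, 64): 'same' padding preserves the spatial dims

Keep in mind that whatever shape the encoder actually outputs must match this decoder input shape when the two are chained into the autoencoder.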

Upvotes: 1
