HuckleberryFinn

Reputation: 1529

Tensorflow 2.0 Keras Model subclassing

I'm trying to implement a simple UNet-like model using the model subclassing method. Here's my code:

import tensorflow as tf 
from tensorflow import keras as K

class Enc_block(K.layers.Layer):
    def __init__(self, in_dim):
        super(Enc_block, self).__init__()
        self.conv_layer =  K.layers.SeparableConv2D(in_dim,3, padding='same', activation='relu')
        self.batchnorm_layer = K.layers.BatchNormalization()
        self.pool_layer = K.layers.SeparableConv2D(in_dim,3, padding='same',strides=2, activation='relu')

    def call(self, x):
        x = self.conv_layer(x)
        x = self.batchnorm_layer(x)
        x = self.conv_layer(x)
        x = self.batchnorm_layer(x)
        return self.pool_layer(x), x


class Dec_block(K.layers.Layer):
    def __init__(self, in_dim):
        super(Dec_block, self).__init__()
        self.conv_layer =  K.layers.SeparableConv2D(in_dim,3, padding='same', activation='relu')
        self.batchnorm_layer = K.layers.BatchNormalization()

    def call(self, x):
        x = self.conv_layer(x)
        x = self.batchnorm_layer(x)
        x = self.conv_layer(x)
        x = self.batchnorm_layer(x)
        return x

class Bottleneck(K.layers.Layer):
    def __init__(self, in_dim):
        super(Bottleneck, self).__init__()
        self.conv_1layer =  K.layers.SeparableConv2D(in_dim,1, padding='same', activation='relu')
        self.conv_3layer =  K.layers.SeparableConv2D(in_dim,3, padding='same', activation='relu')
        self.batchnorm_layer = K.layers.BatchNormalization()

    def call(self, x):
        x = self.conv_1layer(x)
        x = self.batchnorm_layer(x)
        x = self.conv_3layer(x)
        x = self.batchnorm_layer(x)
        return x

class Output_block(K.layers.Layer):
    def __init__(self, in_dim):
        super(Output_block, self).__init__()
        self.logits = K.layers.SeparableConv2D(in_dim,3, padding='same', activation=None)
        self.out = K.layers.Softmax()

    def call(self, x):
        x_logits = self.logits(x)
        x = self.out(x_logits)
        return x_logits, x

class UNetModel(K.Model):
    def __init__(self,in_dim):
        super(UNetModel, self).__init__()
        self.encoder_block = Enc_block(in_dim)
        self.bottleneck = Bottleneck(in_dim)
        self.decoder_block = Dec_block(in_dim)
        self.output_block = Output_block(in_dim)


    def call(self, inputs, training=None):
        x, x_skip1 = self.encoder_block(32)(inputs)
        x, x_skip2 = self.encoder_block(64)(x)
        x, x_skip3 = self.encoder_block(128)(x)
        x, x_skip4 = self.encoder_block(256)(x)
        x = self.bottleneck(x)
        x = K.layers.UpSampling2D(size=(2,2))(x)
        x = K.layers.concatenate([x,x_skip4],axis=-1)
        x = self.decoder_block(256)(x)
        x = K.layers.UpSampling2D(size=(2,2))(x) #56x56
        x = K.layers.concatenate([x,x_skip3],axis=-1)
        x = self.decoder_block(128)(x)
        x = K.layers.UpSampling2D(size=(2,2))(x) #112x112
        x = K.layers.concatenate([x,x_skip2],axis=-1)
        x = self.decoder_block(64)(x)
        x = K.layers.UpSampling2D(size=(2,2))(x) #224x224
        x = K.layers.concatenate([x,x_skip1],axis=-1)
        x = self.decoder_block(32)(x)
        x_logits, x = self.output_block(2)(x)
        return x_logits, x

I am getting the following error:

ValueError: Input 0 of layer separable_conv2d is incompatible with the layer: expected ndim=4, found ndim=0. Full shape received: []

I'm not sure if this is the correct way to implement a network in tf.keras. The idea was to implement the encoder and decoder blocks by subclassing Keras layers and then subclassing Model.

Upvotes: 4

Views: 2745

Answers (1)

Vlad

Reputation: 8585

Take a look at this line from the UNetModel class:

x, x_skip1 = self.encoder_block(32)(inputs)

where self.encoder_block is defined by

self.encoder_block = Enc_block(in_dim)

encoder_block is an instance of a class. By doing self.encoder_block(32) you are invoking the __call__() method of the Enc_block class, which expects to receive a batch of images, i.e. a tensor of rank 4. Instead you are passing the integer 32, which has rank 0, so you get a ValueError that says exactly that: expected ndim=4, found ndim=0. What you probably intended to do is:

x, x_skip1 = self.encoder_block(inputs)
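
To make the distinction concrete, here is a minimal runnable sketch with a plain Keras layer (the shapes are illustrative): hyperparameters go to the constructor, tensors go to the call.

import tensorflow as tf
from tensorflow import keras as K

conv = K.layers.SeparableConv2D(32, 3, padding='same')  # hyperparameters: constructor
x = tf.random.normal((1, 224, 224, 3))                  # a rank-4 batch of images
y = conv(x)                                             # tensors: the call
print(y.shape)                                          # (1, 224, 224, 32)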

You repeat the same mistake on the subsequent lines as well. There is a further problem: you pass the same in_dim to every custom layer:

self.encoder_block = Enc_block(in_dim)
self.bottleneck = Bottleneck(in_dim)
self.decoder_block = Dec_block(in_dim)
self.output_block = Output_block(in_dim)

The input shape of the Bottleneck layer should match the output shape of the Enc_block layer, and so on. What you probably want is to create each block once in __init__() with its own filter count and then simply apply it in call(), as in the sketch below.
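
Here is a minimal sketch of that structure. The filter counts are illustrative, and it will not run as posted: Enc_block and Dec_block apply the same SeparableConv2D instance twice inside their call(), which fails once the layer has been built for the first input's channel count, so each of those applications needs its own layer instance.

class UNetModel(K.Model):
    def __init__(self):
        super(UNetModel, self).__init__()
        # Every block is created once, each with its own filter count
        self.enc1 = Enc_block(32)
        self.enc2 = Enc_block(64)
        self.enc3 = Enc_block(128)
        self.enc4 = Enc_block(256)
        self.bottleneck = Bottleneck(512)
        self.up = K.layers.UpSampling2D(size=(2, 2))  # stateless, safe to reuse
        self.dec4 = Dec_block(256)
        self.dec3 = Dec_block(128)
        self.dec2 = Dec_block(64)
        self.dec1 = Dec_block(32)
        self.output_block = Output_block(2)

    def call(self, inputs, training=None):
        x, skip1 = self.enc1(inputs)  # tensors go to the call, not the constructor
        x, skip2 = self.enc2(x)
        x, skip3 = self.enc3(x)
        x, skip4 = self.enc4(x)
        x = self.bottleneck(x)
        x = K.layers.concatenate([self.up(x), skip4], axis=-1)
        x = self.dec4(x)
        x = K.layers.concatenate([self.up(x), skip3], axis=-1)
        x = self.dec3(x)
        x = K.layers.concatenate([self.up(x), skip2], axis=-1)
        x = self.dec2(x)
        x = K.layers.concatenate([self.up(x), skip1], axis=-1)
        x = self.dec1(x)
        return self.output_block(x)  # returns (logits, probabilities)

Before building something this size, though, I suggest you first understand a simple example. Take a look at the one below. It has two custom layers: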

import tensorflow as tf
import numpy as np
from tensorflow.keras import layers

class CustomLayer1(layers.Layer):
    def __init__(self, outshape=4):
        super(CustomLayer1, self).__init__()
        self.outshape = outshape

    def build(self, input_shape):
        # The kernel's first dimension is taken from the input shape, so the
        # layer does not need to know it at construction time
        self.kernel = self.add_weight(name='kernel',
                                      shape=(int(input_shape[1]), self.outshape),
                                      trainable=True)
        super(CustomLayer1, self).build(input_shape)

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel)

class CustomLayer2(layers.Layer):
    def __init__(self):
        super(CustomLayer2, self).__init__()

    def call(self, inputs):
        # Normalize each row to sum to one; no trainable weights are needed
        return inputs / tf.reshape(tf.reduce_sum(inputs, 1), (-1, 1))

Now I will use both of these layers in a new CombinedLayers class:

class CombinedLayers(layers.Layer):
    def __init__(self, units=3):
        super(CombinedLayers, self).__init__()
        # `units` defines the number of units in the layer. It is the
        # output shape of `CustomLayer1`
        self.layer1 = CustomLayer1(units) 
        # The input shape is inferred dynamically in the `build()`
        # method of the `CustomLayer1` class
        self.layer2 = CustomLayer1(units)
        # Some layers such as this one do not need to know the shape
        self.layer3 = CustomLayer2()

    def call(self, inputs):
        x = self.layer1(inputs)
        x = self.layer2(x)
        x = self.layer3(x)
        return x

Note that the input shape of CustomLayer1 is inferred dynamically in the build() method. Now let's test it with some input:

x_train = [np.random.normal(size=(3, )) for _ in range(5)]  # five 3-dim samples
x_train_tensor = tf.convert_to_tensor(x_train)              # shape (5, 3)

combined = CombinedLayers(3)

result = combined(x_train_tensor)  # each row of `result` sums to one
result.numpy()
# array([[  0.50822063,  -0.0800476 ,   0.57182697],
#        [ -0.76052217,   0.50127872,   1.25924345],
#        [-19.5887986 ,   9.23529798,  11.35350062],
#        [ -0.33696137,   0.22741248,   1.10954888],
#        [  0.53079047,  -0.08941536,   0.55862488]])

This is how you should approach it: create the layers one by one, and each time you add a new layer, test everything with some input to verify that you are doing things correctly. For instance, the Bottleneck block from the question can be smoke-tested on its own, as in the sketch below.
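
A minimal sketch of such a per-block test, using the question's Bottleneck class (the input shape and filter count are illustrative):

import tensorflow as tf

block = Bottleneck(512)                 # filter count is illustrative
x = tf.random.normal((1, 14, 14, 256))  # a fake rank-4 feature map
y = block(x)
print(y.shape)                          # (1, 14, 14, 512)

Running the same kind of test on Enc_block would immediately catch the layer-reuse problem mentioned above.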

Upvotes: 3
