Bernardo Cortez

Reputation: 53

Transfer Learning - How can I change only the output layer in TensorFlow?

I am trying to apply an idea proposed by Rusu et al. in https://arxiv.org/pdf/1511.06295.pdf, which consists of training a NN while changing the output layer according to the class of the input, i.e., provided that we know the id of the input, we would pick the corresponding output layer. This way, all the hidden layers would be trained with all the data, but each output layer would only be trained with its corresponding type of input data.

This is meant to achieve good results in a transfer learning framework.

How can I implement this "change of the last layer" in TensorFlow 2.0?

Upvotes: 2

Views: 650

Answers (1)

Kh4zit

Reputation: 2865

If you use model subclassing, you can define your own forward pass.

import tensorflow as tf
from tensorflow.keras import layers

class MyModel(tf.keras.Model):

    def __init__(self, num_classes):
        super(MyModel, self).__init__()
        self.block_1 = BlockA()  # your custom hidden blocks
        self.block_2 = BlockB()
        self.global_pool = layers.GlobalAveragePooling2D()
        self.classifier = layers.Dense(num_classes)

    def call(self, inputs):
        # route the forward pass based on some condition,
        # e.g. the id of the current input
        if condition:
            x = self.block_1(inputs)
        else:
            x = self.block_2(inputs)
        x = self.global_pool(x)
        return self.classifier(x)

You'll still have the backprop part to figure out, but I think it's fairly easy if you use a multi-output model and train all your "last layers" at the same time.
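To make the routing idea concrete, here is a minimal, hypothetical sketch of the per-task output-layer setup (the `MultiHeadModel` class, the trunk size, and the `task_id` argument are my own illustrative choices, not from the paper): a shared trunk feeds one `Dense` head per task, and the task id selects which head is used. Because only the selected head appears in the forward graph for a given batch, gradients flow through the shared layers and that head only.

```python
import tensorflow as tf

class MultiHeadModel(tf.keras.Model):
    """Shared hidden layers with one output head per task (sketch)."""

    def __init__(self, num_tasks, num_classes):
        super().__init__()
        # trunk trained on data from every task
        self.hidden = tf.keras.layers.Dense(64, activation="relu")
        # one output layer per task; each is only used (and hence
        # only updated) for its own task's batches
        self.heads = [tf.keras.layers.Dense(num_classes)
                      for _ in range(num_tasks)]

    def call(self, inputs, task_id=0):
        x = self.hidden(inputs)
        return self.heads[task_id](x)

model = MultiHeadModel(num_tasks=3, num_classes=10)
x = tf.random.normal((8, 32))   # batch of 8 inputs, feature dim 32
logits = model(x, task_id=1)    # only head 1 is used for this batch
print(logits.shape)             # (8, 10)
```

Inside a custom training loop, `tf.GradientTape` will then naturally produce zero/no gradients for the heads that were not called, so no extra masking is needed.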

Upvotes: 1
