Whisht

Reputation: 753

How to change the last layer of different torch models for finetuning in a unified way?

I have a torch model that receives a pretrained model and alters the last module of that pretrained model for finetuning. The realistic complication is that the last module of these pretrained models has a different name depending on the architecture: for torchvision.models.resnet*() it is fc, for timm.models.vit*() it is head, and some other models may even have several output layers making up the last module.

What I want is to be able to alter these different last layers in a unified way, something like:

    last_module = get_last_module(pretrained_model)
    last_module = nn.Sequential(...)

But written this way, pretrained_model is not changed at all: the assignment only rebinds a local variable instead of replacing the attribute on the model.
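
To make that concrete, here is a minimal sketch of the failure (resnet18 is only used as an example; setattr, or a plain attribute assignment, is what actually swaps the layer):

    import torch.nn as nn
    import torchvision

    model = torchvision.models.resnet18()

    last = model.fc            # 'last' is just another name for the same module
    last = nn.Linear(512, 10)  # rebinds the local name only; model.fc is untouched

    # what actually replaces the layer is assigning to the attribute itself:
    setattr(model, "fc", nn.Linear(512, 10))  # same as: model.fc = nn.Linear(512, 10)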

For example, here is what I currently do, in a rather clumsy way:

    def __init__(self, base_encoder, pred_dim=512):
        """
        dim: feature dimension (default: 2048)
        pred_dim: hidden dimension of the predictor (default: 512)
        """
        super().__init__()

        # create the encoder
        # num_classes is the output fc dimension, zero-initialize last BNs
        self.encoder = base_encoder
        if isinstance(self.encoder, torchvision.models.ResNet):
            dim = self.encoder.fc.out_features
            # build a 3-layer projector
            prev_dim = self.encoder.fc.in_features
            self.encoder.fc = nn.Sequential(
                nn.Linear(prev_dim, prev_dim, bias=False),
                nn.BatchNorm1d(prev_dim),
                nn.ReLU(inplace=True),  # first layer
                nn.Linear(prev_dim, prev_dim, bias=False),
                nn.BatchNorm1d(prev_dim),
                nn.ReLU(inplace=True),  # second layer
                # nn.Linear(prev_dim, dim),
                self.encoder.fc,
                nn.BatchNorm1d(dim, affine=False),
            )  # output layer
            self.encoder.fc[6].bias.requires_grad = False  # hack: not use bias as it is followed by BN

        elif isinstance(self.encoder, timm.models.VisionTransformer):
            dim = self.encoder.head.out_features
            # build a 3-layer projector
            prev_dim = self.encoder.head.in_features
            self.encoder.head = nn.Sequential(
                nn.Linear(prev_dim, prev_dim, bias=False),
                nn.BatchNorm1d(prev_dim),
                nn.ReLU(inplace=True),  # first layer
                nn.Linear(prev_dim, prev_dim, bias=False),
                nn.BatchNorm1d(prev_dim),
                nn.ReLU(inplace=True),  # second layer
                # nn.Linear(prev_dim, dim),
                self.encoder.head,
                nn.BatchNorm1d(dim, affine=False),
            )  # output layer
            self.encoder.head[6].bias.requires_grad = False  # hack: not use bias as it is followed by BN

        # build a 2-layer predictor
        self.predictor = nn.Sequential(
            nn.Linear(dim, pred_dim, bias=False),
            nn.BatchNorm1d(pred_dim),
            nn.ReLU(inplace=True),  # hidden layer
            nn.Linear(pred_dim, dim),
        )
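
What I am after is something along the lines of the sketch below, where the only architecture-specific piece is the name of the classifier attribute (the LAST_MODULE_NAMES mapping and the helper names are placeholders I made up for illustration):

    import torch.nn as nn
    import torchvision
    import timm

    # map each model family to the attribute that holds its last module
    LAST_MODULE_NAMES = {
        torchvision.models.ResNet: "fc",
        timm.models.VisionTransformer: "head",
    }

    def get_last_module_name(model):
        for cls, name in LAST_MODULE_NAMES.items():
            if isinstance(model, cls):
                return name
        raise ValueError(f"unknown classifier attribute for {type(model).__name__}")

    def build_projector(last_linear):
        # wrap the original classifier in the same 3-layer projector as above
        prev_dim, dim = last_linear.in_features, last_linear.out_features
        projector = nn.Sequential(
            nn.Linear(prev_dim, prev_dim, bias=False),
            nn.BatchNorm1d(prev_dim),
            nn.ReLU(inplace=True),  # first layer
            nn.Linear(prev_dim, prev_dim, bias=False),
            nn.BatchNorm1d(prev_dim),
            nn.ReLU(inplace=True),  # second layer
            last_linear,
            nn.BatchNorm1d(dim, affine=False),
        )  # output layer
        projector[6].bias.requires_grad = False  # same no-bias-before-BN hack
        return projector, dim

    # the body of __init__ then becomes architecture-agnostic:
    #   name = get_last_module_name(self.encoder)
    #   projector, dim = build_projector(getattr(self.encoder, name))
    #   setattr(self.encoder, name, projector)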

Upvotes: 0

Views: 1920

Answers (1)

bpfrd

Reputation: 1025

How about using children()?

    pretrained_model_except_last_layer = list(pretrained_model.children())[:-1]
    new_model = nn.Sequential(*pretrained_model_except_last_layer, your_new_classifier)
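
One caveat: this only works when the model's forward() is literally the sequence of its children. For torchvision's ResNet, the flatten between avgpool and fc happens inside forward() rather than in a child module, so it has to be added back by hand; a minimal sketch (resnet18 and a 10-class head are only for illustration):

    import torch
    import torch.nn as nn
    import torchvision

    resnet = torchvision.models.resnet18()
    backbone = list(resnet.children())[:-1]        # everything up to (but not including) fc

    new_model = nn.Sequential(
        *backbone,
        nn.Flatten(1),                             # resnet's forward() flattens here
        nn.Linear(resnet.fc.in_features, 10),      # new 10-class classifier
    )

    out = new_model(torch.randn(2, 3, 224, 224))   # -> torch.Size([2, 10])

For models whose forward() is not a plain chain of submodules (timm's ViT, for instance, handles the class token and positional embeddings inside forward()), rebuilding from children() breaks the model, so replacing the head attribute in place is the safer route there.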

Upvotes: 1
