zanini

Reputation: 1613

L1 norm as regularizer in Pytorch

I need to add an L1 norm as a regularizer to create a sparsity condition in my neural network. I'm training the network for classification. I tried to construct an L1 norm myself, like here, but it didn't work.

I need to add the regularizer after ConvTranspose2d, something like this Keras example:

model.add(Dense(64, input_dim=64,
            kernel_regularizer=regularizers.l2(0.01),
            activity_regularizer=regularizers.l1(0.01)))

But my network was created in PyTorch like so:

upconv = nn.ConvTranspose2d(inner_nc, outer_nc,
                            kernel_size=4, stride=2,
                            padding=1, bias=use_bias)
down = [downrelu, downconv]
up = [uprelu, upconv, upnorm]
model = down + up

Upvotes: 1

Views: 4133

Answers (2)

mbpaulus

Reputation: 7691

You're overthinking this. From your Keras code, I can see that you're trying to impose an L1 penalty on the activations of your layer. The simplest way would be to do something like the following:

activations_to_regularise = upconv(input)   # the activations we want to be sparse
output = remaining_network(activations_to_regularise)

Then use your normal loss function to assess the output against the target, and also incorporate the L1 loss into the objective, so that you get

total_loss = criterion(output, target) + 0.01 * activations_to_regularise.abs().sum()
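
For context, here is a minimal end-to-end sketch of how that could look in a training step. The layer shapes, the MSE criterion, the Adam optimizer and the 0.01 weight below are just placeholder assumptions, not something from your question:

import torch
import torch.nn as nn

# toy stand-ins for the real network pieces
upconv = nn.ConvTranspose2d(8, 3, kernel_size=4, stride=2, padding=1)
remaining_network = nn.Sequential(nn.ReLU(), nn.Conv2d(3, 3, kernel_size=3, padding=1))
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(
    list(upconv.parameters()) + list(remaining_network.parameters()), lr=1e-3)

def training_step(inp, target, l1_weight=0.01):
    optimizer.zero_grad()
    activations_to_regularise = upconv(inp)   # activations we want to be sparse
    output = remaining_network(activations_to_regularise)
    # task loss plus L1 penalty on the activations (reduced to a scalar)
    total_loss = criterion(output, target) + l1_weight * activations_to_regularise.abs().sum()
    total_loss.backward()
    optimizer.step()
    return total_loss.item()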

Upvotes: 1

gngdb

Reputation: 494

In PyTorch, you can do the following (assuming your network is called net):

import torch

def l1_loss(x):
    # L1 norm: sum of absolute values
    return torch.abs(x).sum()

# flatten every parameter tensor and concatenate them into one long vector
to_regularise = []
for param in net.parameters():
    to_regularise.append(param.view(-1))
l1 = l1_weight * l1_loss(torch.cat(to_regularise))  # l1_weight is your regularisation coefficient
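
A quick usage sketch (net, criterion, optimizer, output, target and l1_weight are placeholder names assumed to exist in your training loop): recompute l1 as above each iteration and add it to your task loss before calling backward():

l1_weight = 0.01  # hypothetical regularisation coefficient
loss = criterion(output, target) + l1  # task loss plus weight-L1 penalty
loss.backward()
optimizer.step()

Note that this penalises the network's weights (like kernel_regularizer in Keras), whereas the answer above penalises the layer's activations (activity_regularizer).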

Upvotes: 1
