Christian__

Reputation: 197

Is there a way to override the backward operation on nn.Module

I am looking for a nice way of overriding the backward operation in nn.Module, for example:

class LayerWithCustomGrad(nn.Module):
    def __init__(self):
        super(LayerWithCustomGrad, self).__init__()
        self.weights = nn.Parameter(torch.randn(200))

    def forward(self,x):
        return x * self.weights


    def backward(self,grad_of_c): # This gets called during loss.backward()
        # grad_of_c comes from the gradient of b*23
        grad_of_a = some_operation(grad_of_c)

        # perform extra computation
        # and more computation

        self.weights.grad = another_operation(grad_of_a,grad_of_c)
        return grad_of_a # and the grad of parameter "a" will receive this


layer = LayerWithCustomGrad()

a = nn.Parameter(torch.randn(200),requires_grad=True)
b = layer(a)
c = b*23

Some of the projects I work on contain layers with non-differentiable functions. I would love it if there were somehow a way to connect two broken graphs and/or modify the gradients of graphs that already exist.

It would also be great if there were a way of doing this in TensorFlow.

Upvotes: 6

Views: 5049

Answers (1)

Ivan

Reputation: 40628

The way PyTorch is built, you should first implement a custom torch.autograd.Function which will contain the forward and backward passes for your layer. Then you can create an nn.Module to wrap this function with the necessary parameters.

On this tutorial page you can see ReLU being implemented. I will show here how to build a torch.autograd.Function and its nn.Module wrapper.

class F(torch.autograd.Function):
    """Both forward and backward are static methods."""

    @staticmethod
    def forward(ctx, input, weights):
        """
        In the forward pass we receive a Tensor containing the input and return
        a Tensor containing the output. ctx is a context object that can be used
        to stash information for backward computation. You can cache arbitrary
        objects for use in the backward pass using the ctx.save_for_backward method.
        """
        ctx.save_for_backward(input, weights)
        return input*weights

    @staticmethod
    def backward(ctx, grad_output):
        """
        In the backward pass we receive a Tensor containing the gradient of the loss
        with respect to the output, and we need to compute the gradient of the loss
        with respect to the inputs: here input and weights
        """
        input, weights = ctx.saved_tensors
        grad_input = weights.clone()*grad_output
        grad_weights = input.clone()*grad_output
        return grad_input, grad_weights
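
If the backward above implements the exact analytic gradient (as it does here), you can sanity-check it against numerical finite differences with torch.autograd.gradcheck. A minimal sketch, assuming double-precision inputs since gradcheck is only reliable in float64:

inp = torch.randn(10, dtype=torch.double, requires_grad=True)
w = torch.randn(10, dtype=torch.double, requires_grad=True)
torch.autograd.gradcheck(F.apply, (inp, w))  # returns True if analytic and numerical gradients match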

The nn.Module will initialize the parameters and call F to handle the actual computation for the forward/backward pass.

class LayerWithCustomGrad(nn.Module):
    def __init__(self):
        super().__init__()
        self.weights = nn.Parameter(torch.rand(10))
        self.fn = F.apply

    def forward(self, x):
        return self.fn(x, self.weights)

Now we can run inference and backpropagate:

>>> layer = LayerWithCustomGrad()
>>> x = torch.randn(10, requires_grad=True)
>>> y = layer(x)
>>> y
tensor([ 0.2023,  0.7176,  0.3577, -1.3573,  1.5185,  0.0632,  0.1210,  0.1566,
         0.0709, -0.4324], grad_fn=<FBackward>)

Notice the <FBackward> grad_fn: this is the backward function of F, bound to the inference we just made with x.

>>> y.mean().backward()

>>> x.grad # i.e. grad_input in F.backward
tensor([0.0141, 0.0852, 0.0450, 0.0922, 0.0400, 0.0988, 0.0762, 0.0227, 0.0569,
        0.0309])

>>> layer.weights.grad # i.e. grad_weights in F.backward
tensor([-1.4584, -2.1187,  1.5991,  0.9764,  1.8956, -1.0993, -3.7835, -0.4926,
         0.9477, -1.2219])
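
The same torch.autograd.Function mechanism also covers the non-differentiable layers mentioned in the question: the forward runs the non-differentiable op while the backward supplies a surrogate gradient, which is what reconnects the two halves of the graph. A minimal sketch of a straight-through estimator around torch.round (RoundSTE and the identity surrogate are just one illustrative choice):

class RoundSTE(torch.autograd.Function):
    """Non-differentiable rounding in the forward, identity gradient in the backward."""

    @staticmethod
    def forward(ctx, input):
        return torch.round(input)

    @staticmethod
    def backward(ctx, grad_output):
        # pretend round() was the identity so the gradient keeps flowing
        return grad_output

x = torch.randn(10, requires_grad=True)
y = RoundSTE.apply(x)
y.sum().backward()  # x.grad is now all ones

As for the TensorFlow part of the question, the closest equivalent is tf.custom_gradient, where the decorated function returns both its output and the function used in the backward pass. A minimal sketch of the same element-wise scaling, assuming TF 2.x eager execution (the name scale is just illustrative):

import tensorflow as tf

@tf.custom_gradient
def scale(x, w):
    y = x * w
    def grad(dy):
        # dy is the upstream gradient of the loss w.r.t. y;
        # return the gradients w.r.t. x and w respectively
        return dy * w, dy * x
    return y, grad

x = tf.random.normal([10])
w = tf.random.normal([10])
with tf.GradientTape() as tape:
    tape.watch([x, w])
    loss = tf.reduce_mean(scale(x, w))
dx, dw = tape.gradient(loss, [x, w])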

Upvotes: 7
