Ruijian

Reputation: 11

Do I need to define the backward() in custom loss function?

I have already defined my own loss function, and it does work. The forward pass seems to have no problem, but I am not sure whether the loss is correct because I don't define backward().

import numpy as np
import torch.nn as nn
import torch.nn.functional as F

class _Loss(nn.Module):
    def __init__(self, size_average=True):
        super(_Loss, self).__init__()
        self.size_average = size_average

class MyLoss(_Loss):
    def forward(self, input, target):
        loss = 0
        # BATCH_SIZE and get_weight() are defined elsewhere in my code
        weight = np.zeros((BATCH_SIZE, BATCH_SIZE))
        for a in range(BATCH_SIZE):
            for b in range(BATCH_SIZE):
                weight[a][b] = get_weight(target.data[a][0])
        # pairwise hinge-style term over all pairs in the batch
        for i in range(BATCH_SIZE):
            for j in range(BATCH_SIZE):
                a_ij = (input[i] - input[j] - target[i] + target[j]) * weight[i, j]
                loss += F.relu(a_ij)
        return loss

The questions I want to ask are:

1) Do I need to define backward() for my loss function?

2) If so, how do I define backward()?

3) Is there any way to index the data while doing SGD in torch?

Upvotes: 1

Views: 2158

Answers (1)

Vishnu Subramanian

Reputation: 664

You can write a loss function like the one below.

def mse_loss(input, target):
    return ((input - target) ** 2).sum() / input.data.nelement()

You do not need to implement a backward function. As long as all the inputs to the loss function are PyTorch Variables, the backward pass is handled automatically by torch.autograd.
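For instance, gradients flow through such a function as soon as you call backward() on its result. Here is a minimal sketch; the shapes and the dummy Variables are illustrative assumptions, not part of the original question:

import torch
from torch.autograd import Variable

def mse_loss(input, target):
    return ((input - target) ** 2).sum() / input.data.nelement()

# Hypothetical data, purely for illustration
input = Variable(torch.randn(4, 1), requires_grad=True)
target = Variable(torch.randn(4, 1))

loss = mse_loss(input, target)   # forward pass only; no backward() defined anywhere
loss.backward()                  # autograd derives the backward pass from the forward ops
print(input.grad)                # gradient w.r.t. input is now populated

The same applies to your MyLoss class, since F.relu and the arithmetic on Variables are all differentiable autograd operations.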

Upvotes: 2
