JONHYOJIN

Reputation: 1

F.mse_loss & nn.MSELoss return different MSE value for each test, if batch size is over 1

I'm using the loss function below, which combines a mean-squared-error loss and a cross-entropy loss.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Custom_Loss(nn.Module):
    def __init__(self, device='cpu'):
        super(Custom_Loss, self).__init__()
        self.device = device

    def forward(self, pred_probs, true_class, pred_points, true_points):
        """
            pred_probs:  [B x n_classes]
            true_class:  [B x 1]
            pred_points: [B x 1]
            true_points: [B x 1]
        """
        batch_size = pred_probs.size(0)
        # Sum of squared errors, divided by the batch size by hand
        mse = F.mse_loss(pred_points, true_points, reduction='sum').to(self.device) / batch_size
        # cross_entropy expects class indices of shape [B]
        ce = F.cross_entropy(pred_probs, true_class.squeeze(1))

        loss = mse + ce

        return loss, mse, ce
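As a sanity check (a minimal sketch, assuming PyTorch is installed and the tensors really are `[B x 1]`), `F.mse_loss` and `nn.MSELoss` agree when shapes match: `reduction='sum'` divided by the batch size equals the default mean reduction, because each row contributes exactly one element.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
pred = torch.randn(4, 1)   # hypothetical predictions, [B x 1]
true = torch.randn(4, 1)   # hypothetical targets, [B x 1]

mean_module = nn.MSELoss()(pred, true)                      # default reduction='mean'
mean_func   = F.mse_loss(pred, true, reduction='mean')
sum_over_b  = F.mse_loss(pred, true, reduction='sum') / pred.size(0)

same = torch.allclose(mean_module, mean_func) and torch.allclose(mean_module, sum_over_b)
print(same)
```

Note the equivalence holds only for `[B x 1]` tensors: with `[B x n]` tensors, `reduction='mean'` divides by `B * n`, while `sum / batch_size` divides by `B`. A shape mismatch such as `[B]` vs `[B x 1]` also silently broadcasts to `[B x B]` and changes the result.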

When I use this loss function to evaluate an already-trained model, it returns a different MSE value on every run if the batch size is greater than 1. With a batch size of 1, it returns the same MSE value on every run.

My question is this:

When testing the model with a batch size of 1, I always get the same MSE value. But with any batch size greater than 1, I get a different MSE value on each run. Is there some hidden offset or normalization inside the loss function that depends on the batch size?
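One order-dependent effect worth noting: averaging per-batch means (which is what summing and dividing by the batch size produces, if those per-batch values are later averaged) equals the dataset-wide mean only when every batch has the same size. If the dataset size is not divisible by the batch size, the smaller last batch is weighted the same as full batches, so shuffling changes the aggregate. A plain-Python sketch with hypothetical per-sample squared errors:

```python
# Five hypothetical per-sample squared errors
errors = [0.0, 1.0, 2.0, 3.0, 4.0]

def aggregate(samples, batch_size):
    """Split into batches, take each batch's mean, then average the means."""
    batches = [samples[i:i + batch_size] for i in range(0, len(samples), batch_size)]
    per_batch_means = [sum(b) / len(b) for b in batches]
    return sum(per_batch_means) / len(per_batch_means)

print(aggregate(errors, 1))        # batch size 1: true dataset mean, order-independent
print(aggregate(errors, 2))        # batches [0,1],[2,3],[4]: partial last batch skews the average
print(aggregate(errors[::-1], 2))  # same data, different order: different result
```

With batch size 1 every sample is its own batch, so the average of means is the true mean regardless of order; with batch size 2 the reordered data gives a different aggregate.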

Upvotes: 0

Views: 834

Answers (0)
