Aditya Kansal

Reputation: 171

Why does PyTorch BatchNorm1d give "batch_norm" not implemented for 'Long' when normalizing an integer tensor?

I am trying to learn some functions in the PyTorch framework and got stuck on the error below while normalizing a simple integer tensor. Could someone please help me with this?

Here is the sample code to reproduce the error -

import torch
import torch.nn as nn

# Integer type tensor (torch.randint returns torch.int64, i.e. 'Long', by default)
test_int_input = torch.randint(size=[3, 5], low=1, high=9)

# BatchNorm1d object
batchnorm1D = nn.BatchNorm1d(num_features=5)
test_output = batchnorm1D(test_int_input)

Error -

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-38-6c672cd731fa> in <module>
      1 batchnorm1D = nn.BatchNorm1d(num_features=5)
----> 2 test_output = batchnorm1D(test_int_input)

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py in forward(self, input)
    105             input, self.running_mean, self.running_var, self.weight, self.bias,
    106             self.training or not self.track_running_stats,
--> 107             exponential_average_factor, self.eps)
    108 
    109 

/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in batch_norm(input, running_mean, running_var, weight, bias, training, momentum, eps)
   1668     return torch.batch_norm(
   1669         input, weight, bias, running_mean, running_var,
-> 1670         training, momentum, eps, torch.backends.cudnn.enabled
   1671     )
   1672 

RuntimeError: "batch_norm" not implemented for 'Long'  
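(For context: torch.randint produces a torch.int64 tensor by default, which PyTorch's error messages report as 'Long'. A quick check, assuming the tensor from the snippet above:)

print(test_int_input.dtype)  # torch.int64 -- the 'Long' in the error message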

However, applying the same module to a non-integer tensor works (torch.randn returns torch.float32 by default). Here is an example -

import torch
import torch.nn as nn

# Float type tensor
test_input = torch.randn(size=[3, 5])

# BatchNorm1d object
batchnorm1D = nn.BatchNorm1d(num_features=5)
test_output = batchnorm1D(test_input)
test_output

Output -

tensor([[ 0.4311, -1.1987,  0.9059,  1.1424,  1.2174],
        [-1.3820,  1.2492, -1.3934,  0.1508,  0.0146],
        [ 0.9509, -0.0505,  0.4875, -1.2931, -1.2320]],
       grad_fn=<NativeBatchNormBackward>)

Upvotes: 0

Views: 989

Answers (1)

Ivan

Reputation: 40638

Your input tensor should be of a floating-point type:

>>> batchnorm1D(test_int_input.float())
tensor([[-5.9605e-08, -1.3887e+00, -9.8058e-01,  2.6726e-01,  1.4142e+00],
        [-1.2247e+00,  4.6291e-01,  1.3728e+00, -1.3363e+00, -7.0711e-01],
        [ 1.2247e+00,  9.2582e-01, -3.9223e-01,  1.0690e+00, -7.0711e-01]],
       grad_fn=<NativeBatchNormBackward0>)
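
Batch norm computes a per-feature mean and variance and produces fractional, differentiable outputs, so PyTorch implements it only for floating-point dtypes. A minimal sketch of the fix applied to the question's snippet (.to(torch.float32) is equivalent to .float()):

import torch
import torch.nn as nn

# Integer (Long) tensor, as in the question
test_int_input = torch.randint(size=[3, 5], low=1, high=9)

batchnorm1D = nn.BatchNorm1d(num_features=5)

# Convert to float32 before normalizing; .float() does the same thing
test_output = batchnorm1D(test_int_input.to(torch.float32))

In training mode (the default), each of the 5 feature columns is then normalized to zero mean and unit variance across the batch.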

Upvotes: 1
