Reputation: 95
I'm currently building a neural network in PyTorch that accepts tensors of integers and outputs tensors of integers. Only a small number of non-negative integers are "allowed" (like 0, 1, 2, 3, and 4) as elements of the input and output tensors.
Neural networks usually work in continuous space. For example, the nonlinear activation functions between layers are continuous and map integers to real numbers (including non-integers).
Is it best to use unsigned integers like torch.uint8 internally for the weights and biases of the network, plus some custom activation function that maps ints to ints?
Or should I use high-precision floats like torch.float32 and then round at the end, binning real numbers to the nearest allowed integer? I think this second strategy is the way to go, but maybe I'm missing something that would work nicely.
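For concreteness, here is a minimal sketch of the second strategy (the layer sizes and the allowed range 0..4 are just placeholders):

    import torch
    import torch.nn as nn

    # Ordinary float32 network; integer inputs are cast to float
    # before the forward pass.
    model = nn.Sequential(
        nn.Linear(8, 16),
        nn.ReLU(),
        nn.Linear(16, 8),
    )

    x_int = torch.randint(0, 5, (4, 8))         # allowed values 0..4
    y_float = model(x_int.float())              # continuous internal computation
    y_int = y_float.round().clamp(0, 4).long()  # bin to nearest allowed integer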
Upvotes: 1
Views: 3117
Reputation: 1240
Without knowing too much about your application, I would go for torch.float32 with rounding. The main reason is that if you use a GPU to compute your neural network, it will expect weights and data in the float32 datatype. If you are not going to train your neural network and you want to run on CPU, then datatypes like torch.uint8 may help, as you can achieve more instructions per time interval (i.e. your application should run faster). If that doesn't settle it, please be more specific about your application.
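If you do later want the integer path on CPU, one sketch is PyTorch's dynamic quantization (it uses qint8 rather than uint8, but it's the same idea; the model below is just a stand-in for yours):

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 8))
    # ... train in float32 first ...

    # Convert the Linear weights to 8-bit integers for faster CPU
    # inference; activations are quantized on the fly at run time.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randint(0, 5, (4, 8)).float()
    y = quantized(x).round().clamp(0, 4).long()  # still round at the end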
Upvotes: 3