Reputation: 149
PyTorch makes small changes to the values I assign, which causes noticeably different results in my neural network. For example:
import torch

a = [234678.5462495405945]
b = torch.tensor(a)
print(b.item())
The output is:
234678.546875
The small change PyTorch made to my variable a caused an entirely different result in my neural network, which is very sensitive. How can I prevent PyTorch from making these small changes to assigned values?
Upvotes: 0
Views: 1829
Reputation: 8829
Your question is pretty broad; you haven't shown us your network. That means none of us can address the real issue. But the code sample you show has a more limited scope: why is PyTorch changing my floats?
PyTorch by default uses single-precision floating point (nowadays called binary32). Python by default uses double-precision floating point (nowadays called binary64). Binary32 carries only about 7 significant decimal digits, versus roughly 15-16 for binary64, so when you convert a Python float to a PyTorch FloatTensor, the value is rounded to the nearest representable binary32 number. That rounding is the "little change" you are seeing.
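You can see that the same rounding happens outside PyTorch, using only Python's standard struct module, which shows the change comes from binary32 itself and not from anything PyTorch does:

import struct

a = 234678.5462495405945
# Pack the Python float (binary64) into 4 bytes as binary32, then unpack it back:
a32 = struct.unpack('f', struct.pack('f', a))[0]
print(a32)
# 234678.546875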
If you want, you can specify the data type, but then your entire network will have to be converted to binary64.
Just for your example:
import torch
a = 234678.5462495405945
b = torch.tensor(a, dtype=torch.float64)
print(b.item())
# 234678.54624954058
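If you do decide to run the whole network in binary64, a minimal sketch (the Linear model here is just a placeholder, not your network) is to set the default dtype before building the model, or call .double() on an existing one:

import torch
import torch.nn as nn

torch.set_default_dtype(torch.float64)    # new tensors and parameters default to binary64

model = nn.Linear(4, 1)                   # placeholder model; yours goes here
x = torch.tensor([[1.0, 2.0, 3.0, 4.0]])  # created as float64 because of the default
print(model(x).dtype)                     # torch.float64

# Alternatively, convert an existing model and its inputs explicitly:
# model = model.double()
# x = x.double()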
If your network is that sensitive, you probably have bigger problems: you are likely badly overfitted, or too focused on a single training example. There has been a lot of work on quantizing networks and charting how performance holds up as you move to lower-precision numbers.
Upvotes: 2