Anshul Choudhary

Reputation: 305

Pytorch copying inexact value of numpy floating point number

I'm converting a floating point number (or a numpy array) to a PyTorch tensor, and it seems to copy an inexact value into the tensor. The error appears from the 8th significant digit onwards. This is significant (no pun intended) for my work, as I deal with chaotic dynamics, which are very sensitive to slight changes in the initial conditions.

I'm already using torch.set_printoptions(precision=16) to print 16 digits of precision.

import torch

np_x = state  # `state` is the numpy floating point value being converted
print(np_x)
x = torch.tensor(np_x, requires_grad=True, dtype=torch.float32)
print(x.data[0])

and the output is:

0.7575408585008059
tensor(0.7575408816337585)

It would be helpful to know what is going wrong and how it could be resolved.

Upvotes: 1

Views: 455

Answers (1)

zihaozhihao

Reputation: 4475

Because you're using the float32 dtype. If you convert these two numbers to binary, you will find they are actually the same: strictly speaking, the most accurate float32 representation of both numbers is identical.

0.7575408585008059
Most accurate representation = 7.57540881633758544921875E-1

0.7575408816337585
Most accurate representation = 7.57540881633758544921875E-1

Binary: 00111111 01000001 11101110 00110011
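
To see this concretely, here is a minimal sketch (the names a and b are just illustrative labels, not from the original post) that checks both values pack to the same float32 bit pattern, and shows that a float64 tensor preserves all the printed digits:

import struct

import torch

a = 0.7575408585008059  # value printed from numpy
b = 0.7575408816337585  # value printed from the float32 tensor

# Both Python floats round to the same 32-bit pattern when packed as float32.
bits_a = struct.unpack('>I', struct.pack('>f', a))[0]
bits_b = struct.unpack('>I', struct.pack('>f', b))[0]
print(bits_a == bits_b)        # True
print(format(bits_a, '032b'))  # 00111111010000011110111000110011

# float64 keeps ~15-17 significant digits, so the value survives the round trip.
x = torch.tensor(a, requires_grad=True, dtype=torch.float64)
print(x.item())                # 0.7575408585008059

So if the extra precision matters for your chaotic dynamics, create the tensor with dtype=torch.float64 (or call torch.set_default_dtype(torch.float64)) instead of float32.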

Upvotes: 1
