Mike

Reputation: 779

PyTorch is giving me a different value for a scalar

When I create a tensor from a float using PyTorch and then cast it back to a float, the result is different. Why is this, and how can I fix it so it returns the same value?

import torch

num = 0.9
float(torch.tensor(num))

Output:

0.8999999761581421

Upvotes: 0

Views: 313

Answers (1)

Berriel

Reputation: 13601

This is a floating-point "issue"; you can read more about how Python 3 handles those here.

Essentially, not even num actually stores 0.9 exactly. In any case, the difference you see comes from the fact that num is double-precision (Python floats are 64-bit), while torch.tensor uses single-precision (float32) by default. If you try:

num = 0.9
float(torch.tensor(num, dtype=torch.float64))

you'll get 0.9.
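
As a minimal sketch (assuming a standard PyTorch install), you can inspect the dtype of each tensor to see where the precision is lost:

import torch

num = 0.9  # a Python float, i.e. 64-bit double precision

t32 = torch.tensor(num)                       # defaults to torch.float32
t64 = torch.tensor(num, dtype=torch.float64)  # keeps double precision

print(t32.dtype, float(t32))  # torch.float32 0.8999999761581421
print(t64.dtype, float(t64))  # torch.float64 0.9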

Upvotes: 2
