XapaJIaMnu

Reputation: 1500

PyTorch bfloat16 epsilon subtraction returns the same value

I am trying to sweep some parameters in a very narrow range of values, and I use the bfloat16 epsilon as the step size to avoid issues such as a - b == a for very small b. However, I chanced upon this situation:

import torch
test = torch.Tensor([2.8438]).to(torch.bfloat16)
delta=torch.finfo(torch.bfloat16).eps
test - delta == test
Out[1]: tensor([True])

I thought the purpose of the epsilon is precisely to avoid this kind of issue. What am I doing wrong? How can I get the next smallest number? At the moment I am using:

if test - delta == test:
    test = test - delta*2
else:
    test = test - delta

And this works, but I am concerned my understanding of the epsilon is not correct?

Upvotes: 2

Views: 404

Answers (1)

Renat

Reputation: 8992

From Pytorch docs:

eps | ... | The smallest representable number such that 1.0 + eps != 1.0.

In your example 2.8438 > 2 = 2^1, so its exponent is 1 rather than 0 (the exponent of 1.0). The gap between neighbouring bfloat16 values at that magnitude is therefore 2 * eps, and subtracting a single eps lands halfway between two representable values and rounds back, so 2.8438 - eps == 2.8438 holds.
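A quick way to see this (a minimal sketch; the round-half-to-even behaviour is the standard IEEE rounding that PyTorch follows):

```python
import torch

eps = torch.finfo(torch.bfloat16).eps  # 2**-7 for bfloat16

# At 1.0 (exponent 0) the gap to the next representable value is exactly eps:
one = torch.tensor([1.0], dtype=torch.bfloat16)
print(one + eps != one)        # tensor([True])

# At 2.8438 (exponent 1) the gap doubles to 2*eps, so a single eps is only
# half a step and the result rounds back to the same value:
test = torch.tensor([2.8438], dtype=torch.bfloat16)
print(test - eps == test)      # tensor([True])
print(test - 2 * eps == test)  # tensor([False])
```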

Update. For an arbitrary number, we probably can shift eps by the exponent of that number as:

import torch
import math

def anyEps(n):
    if n == 0:
        return torch.finfo(torch.bfloat16).eps
    # scale eps by 2**exponent(n) so it matches the gap
    # between representable values at n's magnitude
    pow2 = 2 ** math.floor(math.log2(abs(float(n))))
    return torch.finfo(torch.bfloat16).eps * pow2

test = torch.Tensor([2.8438]).to(torch.bfloat16)
delta = anyEps(test)
test - delta == test
# tensor([False])
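As a side note (my addition, not part of the original answer): recent PyTorch versions also provide torch.nextafter, which steps directly to the adjacent representable value and avoids computing the exponent by hand. Whether it accepts bfloat16 inputs may depend on your PyTorch version:

```python
import torch

test = torch.tensor([2.8438], dtype=torch.bfloat16)
# Step toward -inf to get the next smaller representable bfloat16 value.
# Note: bfloat16 support for nextafter may depend on the PyTorch version;
# the same call works on float32 everywhere.
down = torch.tensor([-float("inf")], dtype=torch.bfloat16)
prev = torch.nextafter(test, down)
print(prev < test)  # tensor([True])
```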

Upvotes: 1
