OneAndOnly

Reputation: 1056

Change the precision of torch.sigmoid?

I want my sigmoid to never print a flat 1 or 0, but to print the exact value instead.

I tried using

torch.set_printoptions(precision=20) 

but it didn't work. Here's a sample output of the sigmoid function:

before sigmoid : tensor([[21.2955703735]])
after sigmoid : tensor([[1.]])

But I don't want it to print 1; I want it to print the exact number. How can I force this?

Upvotes: 2

Views: 1949

Answers (1)

jodag

Reputation: 22244

The difference between 1 and the exact value of sigmoid(21.2955703735) is on the order of 5e-10, which is significantly less than machine epsilon for float32 (which is about 1.19e-7). Therefore 1.0 is the best approximation that can be achieved with the default precision. You can cast your tensor to a float64 (AKA double precision) tensor to get a more precise estimate.

import torch

torch.set_printoptions(precision=20)
x = torch.tensor([21.2955703735])
# cast to double before applying sigmoid
result = torch.sigmoid(x.to(dtype=torch.float64))
print(result)

which results in

tensor([0.99999999943577644324], dtype=torch.float64)
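You can confirm the machine-epsilon figures quoted above directly; torch.finfo exposes them per dtype:

```python
import torch

# smallest representable gap above 1.0 for each floating-point dtype
print(torch.finfo(torch.float32).eps)  # ≈ 1.19e-7 (2**-23)
print(torch.finfo(torch.float64).eps)  # ≈ 2.22e-16 (2**-52)
```

Since 5e-10 is far below the float32 eps of ≈1.19e-7, no float32 value between sigmoid(21.2955703735) and 1.0 exists.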

Keep in mind that even with 64-bit floating point computation this is only accurate to about 6 digits past the last 9 (and will be even less precise for larger sigmoid inputs). A better way to represent numbers very close to one is to compute the difference from 1 directly. In this case that is 1 - sigmoid(x), which is equivalent to 1 / (1 + exp(x)), or simply sigmoid(-x). For example,

import torch

x = torch.tensor([21.2955703735])
# sigmoid(-x) == 1 - sigmoid(x), but avoids the rounding to 1
delta = torch.sigmoid(-x.to(dtype=torch.float64))
print(f'sigmoid({x.item()}) = 1 - {delta.item()}')

results in

sigmoid(21.295570373535156) = 1 - 5.642236648842976e-10

and is a more accurate representation of your desired result (though still not exact).
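A side benefit of the sigmoid(-x) form: because floating point has full relative precision near zero, the small difference is representable even in float32, so the double cast is no longer essential. A quick sketch comparing the two dtypes (same input as above):

```python
import torch

x = torch.tensor([21.2955703735])  # float32 by default

delta32 = torch.sigmoid(-x)                     # computed entirely in float32
delta64 = torch.sigmoid(-x.to(torch.float64))   # float64 reference

print(delta32.item())  # ≈ 5.64e-10, no longer rounded to 0 or 1
print(delta64.item())  # ≈ 5.64e-10
```

The two results agree to roughly float32 relative precision (~7 significant digits), whereas the direct 1 - sigmoid(x) in float32 would collapse to exactly 0.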

Upvotes: 4
