Reputation: 1238
The difference between these two functions, as described in this PyTorch forum post (What is the difference between log_softmax and softmax?), is that softmax is exp(x_i) / exp(x).sum() while log softmax is log(exp(x_i) / exp(x).sum()).
But for the PyTorch code below, why am I getting a different output:
>>> it = autograd.Variable(torch.FloatTensor([0.6229,0.3771]))
>>> op = autograd.Variable(torch.LongTensor([0]))
>>> m = nn.Softmax()
>>> log = nn.LogSoftmax()
>>> m(it)
Variable containing:
 0.5611  0.4389
[torch.FloatTensor of size 1x2]
>>> log(it)
Variable containing:
-0.5778 -0.8236
[torch.FloatTensor of size 1x2]
However, log(0.5611) is -0.25095973129 and log(0.4389) is -0.35763441915.
Why is there such a discrepancy?
Upvotes: 5
Views: 6533
Reputation: 46331
Not just by default but always: torch.log is the natural log, while torch.log10 is the base-10 log.
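A minimal check of the two functions (a sketch assuming a recent PyTorch install, using torch.tensor instead of the old autograd.Variable API):

import torch

x = torch.tensor([2.718281828, 10.0])

print(torch.log(x))    # natural log -> tensor([1.0000, 2.3026])
print(torch.log10(x))  # base-10 log -> tensor([0.4343, 1.0000])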
Upvotes: 2
Reputation: 1032
By default, torch.log
provides the natural logarithm of the input, so the output of PyTorch is correct:
ln([0.5611, 0.4389]) = [-0.5778, -0.8236]
Your last results were obtained with the base-10 logarithm.
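To make the comparison concrete, here is a small sketch (written against the current tensor/functional API rather than the deprecated autograd.Variable used in the question):

import torch
import torch.nn.functional as F

it = torch.tensor([[0.6229, 0.3771]])

p = F.softmax(it, dim=1)           # tensor([[0.5611, 0.4389]])
print(F.log_softmax(it, dim=1))    # tensor([[-0.5778, -0.8236]])
print(torch.log(p))                # same values: the natural log of the softmax output
print(torch.log10(p))              # tensor([[-0.2510, -0.3576]]) -- the base-10 values from the question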
Upvotes: 8