hello m

Reputation: 21

Different cross entropy results from NumPy and PyTorch

My prediction is y_hat = [0.57, 0.05, 0.14, 0.10, 0.14] and my target is target = [1, 0, 0, 0, 0].

I need to calculate the cross-entropy loss with NumPy and with the PyTorch loss function.

Using NumPy, my formula is -np.sum(target * np.log(y_hat)), and I get 0.5621189181535413.
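
For reference, here is the full NumPy computation as a minimal, self-contained snippet (it treats y_hat as probabilities directly, so the loss is just -log(0.57)):

import numpy as np

y_hat = np.array([0.57, 0.05, 0.14, 0.10, 0.14])
target = np.array([1, 0, 0, 0, 0])
# Only the entry where target == 1 contributes: -log(0.57) = 0.5621...
print(-np.sum(target * np.log(y_hat)))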

However, using PyTorch:

import torch
import torch.nn as nn

loss = nn.CrossEntropyLoss()
output = torch.FloatTensor([0.57, 0.05, 0.14, 0.10, 0.14])
label = torch.FloatTensor([1, 0, 0, 0, 0])
loss_value = loss(output, label)
print(loss_value)

This gives tensor(1.2586), which is different.

Upvotes: 2

Views: 974

Answers (1)

Matt Hall

Reputation: 8112

nn.CrossEntropyLoss expects raw, unnormalized scores (logits) and applies log-softmax internally, so to reproduce its result you need to apply the softmax function to your y_hat vector before computing the cross-entropy loss. For example, you can use scipy.special.softmax().

>>> from scipy.special import softmax
>>> import numpy as np
>>> y_hat = [0.57, 0.05, 0.14, 0.10, 0.14]
>>> target = [1, 0, 0, 0, 0]
>>> y_hat = softmax(y_hat)
>>> -np.sum(target * np.log(y_hat))
1.2586146726011722

This agrees with the result from PyTorch.
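
Conversely, if your y_hat values really are meant to be probabilities rather than logits, one way to reproduce your NumPy number in PyTorch is to take the log of the probabilities and use the negative log-likelihood loss with a class-index target. A minimal sketch (the batch dimension and tensor names are just for illustration):

>>> import torch
>>> import torch.nn.functional as F
>>> probs = torch.tensor([[0.57, 0.05, 0.14, 0.10, 0.14]])  # batch of one
>>> F.nll_loss(torch.log(probs), torch.tensor([0]))  # class 0 is the true class
tensor(0.5621)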

Upvotes: 1
