
Compute information entropy with PyTorch

Problem:

How can I compute the information entropy by utilizing the current model?

I am trying to implement the method proposed by this paper, but I have no clue how to make it work.

Assume there is a matrix M of size m x n, and some entries of M are missing.
The task of the model is to complete the matrix M, using a method similar to matrix factorization but with an autoencoder architecture.
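
For context, this is roughly the kind of model I have in mind; the layer sizes, masking scheme, and loss below are my own guesses, not taken from the paper:

    import torch
    import torch.nn as nn

    # Sketch only: layer sizes, masking, and loss are my assumptions.
    class CompletionAutoencoder(nn.Module):
        def __init__(self, n_cols, hidden_dim=32):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_cols, hidden_dim), nn.ReLU())
            self.decoder = nn.Linear(hidden_dim, n_cols)

        def forward(self, rows):
            # Reconstruct each row of M from its latent code.
            return self.decoder(self.encoder(rows))

    M = torch.randn(100, 20)                 # toy m x n matrix
    known = torch.rand_like(M) > 0.3         # True where an entry is observed

    model = CompletionAutoencoder(n_cols=20)
    recon = model(M * known)                 # unknown entries zeroed out on input
    loss = ((recon - M)[known] ** 2).mean()  # train on the known entries only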

The following is the relevant statement from the paper, quoted without modification:

In this algorithm, we use the binomial distribution probability to measure the uncertainty of each unknown delay. Specifically, for each unknown delay, we utilize the current model to compute the probabilities of two potential delays. These two probabilities form a binomial distribution.

Thus, we set the probability of a delay value as P(x)=p, P(x_bar)=1-p, where p equals 1 indicates that we know the delay at x, and p equals 0 indicates that we do not know the delay at x_bar.

Meanwhile, we employ information entropy to measure the uncertainty of the distribution.
For a binomial distribution, the information entropy can be calculated as: H(p) = -p log(p) - (1-p) log(1-p)
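
The entropy formula itself I think I can write down directly; here is my attempt (the clamp is my own guard against log(0), and PyTorch's built-in Bernoulli entropy should agree with it, in nats):

    import torch

    def binomial_entropy(p: torch.Tensor) -> torch.Tensor:
        # H(p) = -p*log(p) - (1-p)*log(1-p), clamped to avoid log(0)
        p = p.clamp(1e-7, 1 - 1e-7)
        return -p * torch.log(p) - (1 - p) * torch.log(1 - p)

    p = torch.tensor([0.0, 0.25, 0.5, 1.0])
    print(binomial_entropy(p))  # ~0 at p=0 and p=1, maximal (log 2 ~ 0.693) at p=0.5

    # Cross-check against PyTorch's built-in Bernoulli entropy:
    print(torch.distributions.Bernoulli(probs=p.clamp(1e-7, 1 - 1e-7)).entropy())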

Is it something like using sigmoid or softmax to convert the model's output into probabilities, and then using those probabilities to compute the information entropy?
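
Concretely, what I am imagining is the following, where the logits stand in for whatever raw scores the current model produces at the unknown entries (a placeholder of mine, not from the paper):

    import torch

    logits = torch.randn(5)    # hypothetical raw model outputs at unknown entries

    p = torch.sigmoid(logits)  # squash raw outputs into probabilities in (0, 1)
    entropy = torch.distributions.Bernoulli(logits=logits).entropy()

    # The entry with the highest entropy is the one the model is least certain about.
    most_uncertain = torch.argmax(entropy)
    print(p, entropy, most_uncertain)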

Any help is appreciated. Thanks.
