Reputation: 96304
Say I have two pdfs, e.g.:
from scipy import stats
pdf_y = stats.beta(5, 9).pdf
pdf_x = stats.beta(9, 5).pdf
I would like to compute their KL divergence. Before I reinvent the wheel, are there any built-ins in the PyData ecosystem for doing this?
Upvotes: 5
Views: 5064
Reputation: 520
The other answers give empirical KL-divergence calculations, but for two Beta distributions, as in the question, there is a closed-form solution.
I was not able to find a snippet for the KL divergence of Beta distributions on the web, so in the end I coded it myself.
Sharing it here as it might be useful for others:
import numpy as np
from scipy import special

def kl(a1, b1, a2, b2):
    """KL(Beta(a1, b1) || Beta(a2, b2)) in closed form.

    See https://en.wikipedia.org/wiki/Beta_distribution
    """
    B = special.beta
    DG = special.digamma
    return (np.log(B(a2, b2) / B(a1, b1))
            + (a1 - a2) * DG(a1)
            + (b1 - b2) * DG(b1)
            + (a2 - a1 + b2 - b1) * DG(a1 + b1))
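As a quick sanity check against the question's distributions (a sketch, assuming the kl function above):
print(kl(5, 9, 9, 5))  # ≈ 2.538; the log-Beta term cancels here by symmetry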
Upvotes: 1
Reputation: 1216
Since the KL divergence is defined as an integral in the continuous case, I'm afraid you will have to do a Monte Carlo integration over the (hyper)space of the two distributions.
In your case this means drawing uniform random numbers on the interval [0, 1] and evaluating the two PDFs at those points, to be used in the integral estimate.
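A minimal sketch of that estimate, using the question's two Beta distributions (my addition, not part of the original answer):
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=100_000)  # uniform samples over the support [0, 1]
p = stats.beta(5, 9).pdf(x)
q = stats.beta(9, 5).pdf(x)
# KL(P || Q) = E_{x ~ U(0,1)}[p(x) * log(p(x) / q(x))]; the interval has length 1
print(np.mean(p * np.log(p / q)))        # should approach ~2.54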
Upvotes: 2
Reputation: 8283
KL divergence is available in scipy.stats.entropy (when the second argument qk is given). From the docstring:
stats.entropy(pk, qk=None, base=None)
Calculate the entropy of a distribution for given probability values.
If only probabilities `pk` are given, the entropy is calculated as
``S = -sum(pk * log(pk), axis=0)``.
If `qk` is not None, then compute a relative entropy (also known as
Kullback-Leibler divergence or Kullback-Leibler distance)
``S = sum(pk * log(pk / qk), axis=0)``.
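Note that entropy operates on discrete probability vectors, so for the question's continuous pdfs you would evaluate them on a grid first. A minimal sketch (my assumed usage, not from the docstring):
import numpy as np
from scipy import stats

x = np.linspace(1e-6, 1 - 1e-6, 10_000)  # grid over the open interval (0, 1)
pk = stats.beta(5, 9).pdf(x)
qk = stats.beta(9, 5).pdf(x)
# entropy() normalizes pk and qk to sum to 1; on a uniform grid the bin
# widths cancel, so this approximates the continuous KL(P || Q)
print(stats.entropy(pk, qk))             # ~2.54 for these two Betas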
Upvotes: 3
Reputation: 931
It looks like the package nimfa has what you're looking for: http://nimfa.biolab.si
import numpy as np
import nimfa

V = np.matrix([[1, 2, 3], [4, 5, 6], [6, 7, 8]])
fctr = nimfa.mf(V, method="lsnmf", max_iter=10, rank=3)
fctr_res = nimfa.mf_run(fctr)
# Print the loss function according to the Kullback-Leibler divergence
# (by default the Euclidean metric is used).
print("Distance Kullback-Leibler: %5.3e" % fctr_res.distance(metric="kl"))
This isn't exactly what you're looking for, since it appears to only take one input, but it may be a place to start.
Additionally, this link could be useful. It seems to have some code (not using NumPy) to compute the same thing: https://code.google.com/p/tackbp2011/source/browse/TAC-KBP2011/src/python-utils/LDA/kullback-leibler-divergence.py?r=100
Upvotes: 1