hildensia

Reputation: 1740

What is the most pythonic way to conditionally compute?

I'm implementing Bayesian Changepoint Detection in Python/NumPy (if you are interested, have a look at the paper). I need to compute likelihoods for data in ranges [a, b], where a and b can take any value from 1 to n. However, I can prune the computation at some points, so I don't have to compute every likelihood. On the other hand, some likelihoods are used more than once, so I can save time by caching the values in a matrix P[a, b]. Right now I check whether a value has already been computed every time I use it, which I find a bit of a hassle. It looks like this:

# ...
P = np.ones((n, n)) * np.inf  # a likelihood can never be inf, so inf
                              # serves as a "not computed yet" marker

for a in range(n):
    for b in range(a, n):

        # The following two lines get annoying and error-prone if you
        # use P more than once

        if P[a, b] == np.inf:  
            P[a, b] = likelihood(data, a, b)

        Q[a] += P[a, b] * g[a] * Q[a - 1]  # some computation using P[a, b]
        # ...

I wonder whether there is a more intuitive and Pythonic way to achieve this, without the if ... statement before every use of P[a, b]. Something like an automagical function call if some condition is not met. I could of course make the likelihood function aware that it can cache values, but then it needs some kind of state (e.g. it becomes an object), which I want to avoid.
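To make concrete what I mean by "automagical": a dict subclass with __missing__ would behave roughly the way I imagine (just a sketch, assuming likelihood and data are in scope; LikelihoodTable is a made-up name):

class LikelihoodTable(dict):
    def __missing__(self, key):
        # compute, store and return the value on first access
        a, b = key
        value = self[key] = likelihood(data, a, b)
        return value

P = LikelihoodTable()
# ... later, no if needed:
# Q[a] += P[a, b] * g[a] * Q[a - 1]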

The likelihood function

Since it was asked for in a comment, here is the likelihood function. It actually computes the conjugate prior and then the likelihood, and all in log representation, so it is quite complicated.

import numpy as np
from scipy.special import gammaln

def gaussian_obs_log_likelihood(data, t, s):
    n = s - t
    mean = data[t:s].sum() / n

    muT = (n * mean) / (1 + n)
    nuT = 1 + n
    alphaT = 1 + n / 2.0  # float divisions, so this also works under Python 2.7
    betaT = 1 + 0.5 * ((data[t:s] - mean) ** 2).sum() + (float(n) / (1 + n)) * (mean ** 2 / 2)
    scale = (betaT * (nuT + 1)) / (alphaT * nuT)

    # splitting the PDF of the Student's t distribution up is /much/ faster (~ factor 20)
    prob = 0  # log representation: the running sum of log terms starts at 0
    for yi in data[t:s]:
        prob += np.log(1 + (yi - muT) ** 2 / (nuT * scale))

    lgA = gammaln((nuT + 1) / 2.0) - np.log(np.sqrt(np.pi * nuT * scale)) - gammaln(nuT / 2.0)

    return n * lgA - (nuT + 1) / 2.0 * prob
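
For example, for a 1-D NumPy array the log likelihood of the range [0, 50) is computed as:

data = np.random.randn(100)
print(gaussian_obs_log_likelihood(data, 0, 50))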

Although I work with Python 2.7, answers for both 2.7 and 3.x are appreciated.

Upvotes: 4

Views: 274

Answers (1)

Aaron Digulla

Reputation: 328724

I would use a sibling of defaultdict for this (you can't use defaultdict directly, since its factory isn't told which key is missing):

class Cache(object):
    def __init__(self):
        self.cache = {}

    def get(self, a, b):
        key = (a, b)

        result = self.cache.get(key, None)
        if result is None:
            # not cached yet: compute and remember the value
            result = likelihood(data, a, b)
            self.cache[key] = result

        return result
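
A minimal usage sketch, assuming the data, g and Q arrays from the question are in scope; the if check disappears from the loop:

cache = Cache()

for a in range(n):
    for b in range(a, n):
        # the first call computes and stores the value, later calls hit the cache
        Q[a] += cache.get(a, b) * g[a] * Q[a - 1]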

Another approach would be using a cache decorator on likelihood as described here.
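
For reference, a hand-rolled version of such a decorator (a sketch; memoize_on_range is a made-up name). NumPy arrays are not hashable, so it keys on (a, b) only and assumes data never changes between calls. On Python 3.2+, functools.lru_cache offers the same behavior out of the box, with the same hashability caveat:

import functools

def memoize_on_range(func):
    cache = {}

    @functools.wraps(func)
    def wrapper(data, a, b):
        # data is assumed constant, so (a, b) suffices as the key
        key = (a, b)
        if key not in cache:
            cache[key] = func(data, a, b)
        return cache[key]

    return wrapper

likelihood = memoize_on_range(likelihood)  # or decorate the definition directly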

Upvotes: 4
