Reputation: 147
My model has three parameters, say theta_1, theta_2 and nu.
I want to sample theta_1 and theta_2 from the posterior with nu marginalized out (which can be done analytically), i.e. from p(theta_1, theta_2 | D) rather than p(theta_1, theta_2, nu | D), where D is the data. After that, I want to resample nu based on the new values of theta_1 and theta_2. So one sampling scan would consist of the steps

1. sample theta_1, theta_2 from p(theta_1, theta_2 | D), with nu marginalized out, and
2. sample nu from p(nu | theta_1, theta_2, D).

In other words, a collapsed Gibbs sampler.
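To make the scan concrete, here is a self-contained sketch in plain NumPy (not PyMC3) for a hypothetical toy model: y_i ~ Normal(theta_1 + theta_2 * x_i, 1/nu) with a Gamma(a0, b0) prior on the precision nu, which can be marginalized and resampled in closed form. The data, priors, and proposal scale are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y_i ~ Normal(theta1 + theta2 * x_i, 1/nu)
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=50)
a0, b0 = 2.0, 1.0  # Gamma(shape=a0, rate=b0) prior on the precision nu

def sample_theta_collapsed(theta, rng):
    # Step 1: update (theta1, theta2) with nu integrated out analytically.
    # With a flat prior on theta, integrating the Gamma prior on nu out of
    # the Normal likelihood gives log p(theta | D) up to a constant:
    def logp(t):
        resid = y - (t[0] + t[1] * x)
        return -(a0 + len(y) / 2) * np.log(b0 + 0.5 * resid @ resid)
    # A simple random-walk Metropolis step on the marginal density.
    prop = theta + rng.normal(scale=0.1, size=2)
    if np.log(rng.uniform()) < logp(prop) - logp(theta):
        return prop
    return theta

def sample_nu(theta, rng):
    # Step 2: resample nu from its full conditional, a Gamma by conjugacy.
    resid = y - (theta[0] + theta[1] * x)
    shape = a0 + len(y) / 2
    rate = b0 + 0.5 * resid @ resid
    return rng.gamma(shape, 1.0 / rate)  # numpy uses scale = 1/rate

theta = np.zeros(2)
for _ in range(2000):  # one collapsed Gibbs scan per iteration
    theta = sample_theta_collapsed(theta, rng)
    nu = sample_nu(theta, rng)
```

The key point is that step 1 never touches nu: its target is the marginal p(theta_1, theta_2 | D), and nu is regenerated afterwards from its conditional.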
How would I go about that in PyMC3? I reckon I need to implement a custom step method, but I'm not sure how to construct the likelihood for it. How do I get access to the model specification when implementing a step method in PyMC3?
Upvotes: 0
Views: 529
Reputation: 4203
The notions of step methods and likelihoods are somewhat conflated in the question, but I see what you are driving at. Step methods are typically independent of the likelihood, which is passed to the step method as an argument. For example, check out the slice sampler step method in PyMC 3. Likelihoods are stochastic objects that return log-likelihood values conditional on the values of their parents in the directed acyclic graph.
If you are doing Gibbs sampling, you are not typically concerned with evaluating likelihoods, because you are iteratively sampling directly from the conditionals of the model parameters. We do not currently have Gibbs in PyMC 3, though there is some rudimentary Gibbs support in PyMC 2. It's a little troublesome to implement generally because it involves recognizing conjugate relationships in the model. Moreover, in PyMC 3 you have access to gradient-based samplers (Hamiltonian Monte Carlo), which are much more efficient than Gibbs, so there are a few reasons you may not want to implement Gibbs.
That said, PyMC offers a tremendous amount of flexibility for implementing custom step methods and likelihoods. So long as the step (astep) function returns a new point, you can pretty much do what you like otherwise. There's no guarantee that it will be a good sampler, of course.
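To illustrate the shape of such a step function, here is a minimal random-walk Metropolis astep in the spirit of PyMC3's ArrayStep interface (which passes the current point and a logp callable to astep and expects a new point back). It is written in plain NumPy so it runs standalone; in actual PyMC3 code you would subclass ArrayStep, and the proposal scale here is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)

def astep(q0, logp, scale=0.5):
    """Return a new point given the current point q0 and a logp callable."""
    q = q0 + rng.normal(scale=scale, size=q0.shape)   # propose a move
    accept = np.log(rng.uniform()) < logp(q) - logp(q0)
    return q if accept else q0

# Usage: sample from a standard normal via its log-density.
logp = lambda q: -0.5 * np.sum(q ** 2)
q = np.zeros(3)
for _ in range(1000):
    q = astep(q, logp)
```

The point is that the step method only ever sees the log-density function, not the model that produced it; the model specification stays on the other side of that interface.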
Upvotes: 1