Reputation: 811
pymc is great! It really opened up my world to MCMC, so thank you for coding it.
Currently I am using pymc to estimate some parameters and their confidence intervals by fitting a function to observations. For most of the observation sets the posterior distributions (pymc.Matplot.plot(MCMCrun)) of the parameters are nicely shaped, Gaussian-like, and the best estimate and uncertainty of a given parameter (parameter a in this case) come from:
param_estimate = MCMCrun.a.stats()['mean']
param_uncertainty = MCMCrun.a.stats()['standard deviation']
and the confidence interval from:
lower, upper = scipy.stats.mstats.mquantiles(MCMCrun.a.trace(), [0.025, 0.975])
However, in some cases the posterior distributions look like this:
As you can see, A should not be below zero; in my priors I set both A and B to be Uniform, positive, and wide enough to cover the reasonable parameter space. My question is:
What is the correct approach to interpreting the posterior distribution for A?
Taking the mean of the trace will now yield a value that is not at the peak of the posterior distribution and is therefore not really representative. Should I just continue running more iterations? Or is this the best estimate of A I will get, i.e. that it lies between 0 and ~7?
Upvotes: 2
Views: 1058
Reputation: 4203
The posterior distribution summarizes the posterior uncertainty in the parameter, conditional on the dataset you fit the model with and, of course, on the model structure itself. From the posterior, you can extract a measure of central tendency (mean or median) and a posterior credible interval, which can be obtained from the appropriate quantiles of the posterior.
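For example, a minimal sketch of how those two summaries could be read off the trace from your question (assuming MCMCrun is the fitted pymc.MCMC object; the names a_samples, a_median, a_lower and a_upper are just illustrative):

import numpy as np
import scipy.stats.mstats

# Posterior samples of A (the same trace the question already extracts)
a_samples = MCMCrun.a.trace()

# Central tendency: the posterior median is more robust than the mean
# when the distribution is skewed or piled up against the A >= 0 boundary
a_median = np.median(a_samples)

# 95% equal-tailed credible interval from the 2.5% and 97.5% posterior quantiles
a_lower, a_upper = scipy.stats.mstats.mquantiles(a_samples, [0.025, 0.975])

print("A = %.3g, 95%% credible interval [%.3g, %.3g]" % (a_median, a_lower, a_upper))

For a posterior like yours, piled up against zero, the median and a quantile-based interval describe the uncertainty better than mean ± standard deviation; the node's stats() summary should also report quantiles that can be used the same way.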
Upvotes: 2