Reputation: 143
I am trying to implement the approach used in the paper 'On Modeling Change Points in Non-Homogeneous Poisson Processes' by F. Ruggeri and S. Sivaganesan. In their paper they define the full conditional distributions of the model parameters M, sigma^2, phi and the beta's.
The paper states that the beta values can be determined using Gibbs sampling. Here n is the number of data points, where the data points are the t_i values; t_0 = 0 and t_{n+1} is a value slightly larger than t_n, which I have chosen to be the ceiling of t_n + 1. The other constants have the following (initial) values:
alpha, delta, rho, tau, mu, phi = 0.01, 0.01, 0.01, 0.01, 0, 0
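For concreteness, the setup looks like this (the event times here are made-up values only for illustration, and the variable names are my own):
import numpy as np

t = np.array([1.2, 2.7, 3.1, 5.4, 6.0])  # made-up event times t_1, ..., t_n
n = len(t)                                # number of data points
t_np1 = np.ceil(t[-1] + 1)                # t_{n+1} = ceil(t_n + 1)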
Note that the full conditionals for the beta's are only given up to proportionality, not equality, which implies that I need to determine a normalisation constant. This is where the challenge lies for me.
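To make explicit what I mean by the normalisation constant: for a generic unnormalised density f on (0, inf), I would compute something like the following (f is just a placeholder here, not one of the conditionals from the paper):
import numpy as np
from scipy.integrate import quad

def f(x):
    # placeholder unnormalised density, only to illustrate the idea
    return np.exp(-x) * x**2

Z, _ = quad(f, 0, np.inf)  # numerical normalising constant
# f(x) / Z would then be the normalised density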
I attempted the following implementation in Python.
import numpy as np
from scipy.stats import gamma, invgamma, norm
from scipy.integrate import quad

def M_sample(alpha, delta, n, t, beta):
    # Gamma draw for M: shape alpha + n, rate delta + sum of (t_i^beta_i - t_{i-1}^beta_i)
    shape = alpha + n
    first_val = np.power(t[0], beta[0]) - np.power(0, beta[0])  # interval (t_0, t_1] with t_0 = 0
    last_val = np.power(np.ceil(t[-1] + 1), beta[-1]) - np.power(t[-1], beta[-1])  # interval (t_n, t_{n+1}]
    middle = sum(np.power(t[i], beta[i]) - np.power(t[i-1], beta[i]) for i in range(1, n))  # intervals (t_{i-1}, t_i]
    scale = 1/(delta + middle + first_val + last_val)
    return gamma.rvs(shape, scale=scale)
def sample_sigma2(rho, tau, n, beta, phi):
    # Inverse-gamma draw for sigma^2
    shape = rho + n/2
    scale = tau + 0.5*sum(np.power(np.log(beta[i]) - phi, 2) for i in range(n+1))
    return invgamma.rvs(a=shape, scale=scale)
def sample_phi(mu, tau, n, beta, sigma2):
    # Normal draw for phi, combining the N(mu, tau^2) prior with the log(beta) values
    tau2 = np.power(tau, 2)
    mu1 = (mu*sigma2 + tau2*sum(np.log(beta[i]) for i in range(n+1)))/(sigma2 + (n+1)*tau2)
    tau1_sq = (tau2*sigma2)/((n+1)*tau2 + sigma2)
    return norm.rvs(mu1, np.sqrt(tau1_sq))
def unnormalized_posterior_beta_i(beta, M, t, phi, sigma2, i):
    # Unnormalised full conditional density of beta_i for the "middle" intervals (t_{i-1}, t_i]
    return np.power(t[i], beta) * np.exp(-M*(np.power(t[i], beta) - np.power(t[i-1], beta))
                                         - np.power(np.log(beta) - phi, 2)/(2*sigma2))

def unnormalized_posterior_beta_0(beta, M, t, phi, sigma2):
    # Unnormalised full conditional density of the first beta, for the interval (t_0, t_1] with t_0 = 0
    return np.power(t[0], beta) * np.exp(-M*(np.power(t[0], beta) - np.power(0, beta))
                                         - np.power(np.log(beta) - phi, 2)/(2*sigma2))

def unnormalized_posterior_beta_n(beta, M, t, phi, sigma2):
    # Unnormalised full conditional density of the last beta, for the interval (t_n, t_{n+1}]
    y = np.ceil(t[-1] + 1)  # t_{n+1} = ceil(t_n + 1)
    return (beta**(-1)) * np.exp(-M*(np.power(y, beta) - np.power(t[-1], beta))
                                 - np.power(np.log(beta) - phi, 2)/(2*sigma2))
def sample_beta(n, M, t, phi, sigma2, beta):
    # "Middle" betas: evaluate the unnormalised density at the current beta[i]
    # and divide by its integral over (0, 10) as the normalising constant
    for i in range(1, n):
        integral_i, _ = quad(unnormalized_posterior_beta_i, 0, 10, args=(M, t, phi, sigma2, i))
        beta_i = unnormalized_posterior_beta_i(beta[i], M, t, phi, sigma2, i)
        beta[i] = beta_i/integral_i
    # First beta
    integral_0, _ = quad(unnormalized_posterior_beta_0, 0, 10, args=(M, t, phi, sigma2))
    beta_0 = unnormalized_posterior_beta_0(beta[0], M, t, phi, sigma2)
    beta[0] = beta_0/integral_0
    # Last beta
    integral_n, _ = quad(unnormalized_posterior_beta_n, 0, 10, args=(M, t, phi, sigma2))
    beta_n = unnormalized_posterior_beta_n(beta[-1], M, t, phi, sigma2)
    beta[-1] = beta_n/integral_n
    return beta
def gibbs_sampler(n, t, n_samples=5):
    alpha, delta, rho, tau, mu, phi = 0.01, 0.01, 0.01, 0.01, 0, 0
    M_samples = []
    sigma2_samples = []
    phi_samples = []
    beta_samples = []
    beta = np.ones(n+1)
    for _ in range(n_samples):
        M = M_sample(alpha, delta, n, t, beta)
        sigma2 = sample_sigma2(rho, tau, n, beta, phi)
        phi = sample_phi(mu, tau, n, beta, sigma2)
        beta = sample_beta(n, M, t, phi, sigma2, beta)
        M_samples.append(M)
        sigma2_samples.append(sigma2)
        phi_samples.append(phi)
        beta_samples.append(beta.copy())  # store a copy, since sample_beta updates beta in place
    return M_samples, sigma2_samples, phi_samples, beta_samples
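I then call the sampler on the made-up event times from above:
M_s, sigma2_s, phi_s, beta_s = gibbs_sampler(len(t), t, n_samples=5)
print(beta_s[-1])  # the last stored beta vector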
The main issue is that the beta values become far too large, especially beta[-1]. With the integration calls above I should be computing the normalising constant, which should result in reasonable beta values, so my assumption is that I am making a mistake with the normalisation constant. Where am I going wrong in implementing the algorithm?
Upvotes: 0
Views: 71