Reputation: 8855
I am attempting to learn the mechanics of Gibbs sampling. I have two variables for which I am trying to conduct inference. This example assumes only Gaussian distributions. My code in R looks like the following.
library(condMVNorm)
rm(list=ls())
means <- c(0, 25)
cov <- matrix(c(1.09, 1.95, 1.95, 4.52), 2, 2)
k <- 10
initSample <- c(0, 0)
traceSamples <- matrix(, k, 2)
for (i in 1:k) {
  X <- initSample[1]
  c1 <- rcmvnorm(n=1, mean=means, sigma=cov, dep=2, given=1, X=X)
  X <- c1
  c2 <- rcmvnorm(n=1, mean=means, sigma=cov, dep=1, given=2, X=X)
  currentSample <- c(c1, c2)
  traceSamples[i, ] <- currentSample
  initSample <- currentSample
}
colMeans(traceSamples)
What I get as the output is the following.
[1] 2220.7619 947.3168
I would have expected that the first variable would be pretty close to 25 and the second one to 0.
I do not know if my understanding of Gibbs sampling is wrong, because the literature invariably says you sample from the conditional distribution p(X1 = x1 | X2 = x2). To me, p(X1 = x1 | X2 = x2) is the density of X1 = x1 given X2 = x2, and one would map that to dcmvnorm and not rcmvnorm.
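To make this concrete, here is the difference between the two calls as I understand it: dcmvnorm evaluates the conditional density at a single point, while rcmvnorm draws a random sample from the conditional distribution. This reuses means and cov from above; the evaluation point 24 is an arbitrary value chosen for illustration.
# Conditional density of variable 2 at the point 24, given variable 1 = 0
# (returns a number, not a sample):
dcmvnorm(x=24, mean=means, sigma=cov, dep=2, given=1, X=0)
# A random draw of variable 2 from the same conditional distribution
# (this is what each Gibbs update uses):
rcmvnorm(n=1, mean=means, sigma=cov, dep=2, given=1, X=0)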
Printing the traceSamples matrix, I get the following.
           [,1]         [,2]
 [1,]   22.0574   -0.7827272
 [2,]   63.6865   16.3375931
 [3,]  138.7078   49.2994688
 [4,]  272.0850  107.3952335
 [5,]  510.2272  208.3522406
 [6,]  940.7504  395.4438929
 [7,] 1708.2603  725.3048137
 [8,] 3080.5096 1317.7650679
 [9,] 5538.0734 2378.8674730
[10,] 9933.2615 4275.1848015
The values seem to be increasing, which suggests something is wrong with my R code. Furthermore, I also tried a very simple sampling without the for loop.
means <- c(0, 25)
cov <- matrix(c(1.09, 1.95, 1.95, 4.52), 2, 2)
x1 <- rcmvnorm(n=1, mean = means, sigma = cov, dep=2, given=1, X=c(0))
x2 <- rcmvnorm(n=1, mean = means, sigma = cov, dep=1, given=2, X=c(x1))
x1 <- rcmvnorm(n=1, mean = means, sigma = cov, dep=2, given=1, X=c(x2))
x2 <- rcmvnorm(n=1, mean = means, sigma = cov, dep=1, given=2, X=c(x1))
My x1 and x2 values from each of these pairs of draws are as follows.
23.40496 -0.01044726
22.67643 -0.6836546
Any ideas on what I am doing wrong?
Note that I was able to get results much closer to what I expected with the following code.
means <- c(0, 25)
cov <- matrix(c(1.09, 1.95, 1.95, 4.52), 2, 2)
k <- 9000
x1 <- 0
x2 <- 0
traceSamples <- matrix(, k, 2)
for (i in 1:k) {
  x1 <- rcmvnorm(n=1, mean=means, sigma=cov, dep=2, given=1, X=x2)
  x2 <- rcmvnorm(n=1, mean=means, sigma=cov, dep=1, given=2, X=x1)
  traceSamples[i, ] <- c(x1, x2)
}
colMeans(traceSamples)
Could someone tell me what I'm doing wrong with reusing and re-assigning initSample?
Upvotes: 1
Views: 532
Reputation: 2289
Here I solved the problem of why the Gibbs sampler was producing erroneous values in the simulation. The code does get complicated when written this way, and I think some lines could be removed to structure it more efficiently, which would also make it faster (see the sketch after the diagnostics output below). However, notice the changes I made at X <- initSample, X=X[2], and X=X[1].
library(condMVNorm)
rm(list=ls())
means <- c(0, 25)
cov <- matrix(c(1.09, 1.95, 1.95, 4.52), 2, 2)
k <- 9000
initSample <- c(0,0)
traceSamples <- matrix(, k, 2)
for (i in 1:k){
  X <- initSample
  c1 <- rcmvnorm(n=1, mean=means, sigma=cov, dep=2, given=1, X=X[2])
  X <- c1
  c2 <- rcmvnorm(n=1, mean=means, sigma=cov, dep=1, given=2, X=X[1])
  currentSample <- c(c1, c2)
  traceSamples[i, ] <- currentSample
  initSample <- currentSample
}
> head(traceSamples,10)
[,1] [,2]
[1,] 23.8233821520619 -0.9169596237697860
[2,] 22.8293033255339 -1.6287517329781345
[3,] 21.3923155517845 -1.9104909272586084
[4,] 20.5331401021848 -2.3320921649401360
[5,] 21.4287399563041 -1.1376683051591154
[6,] 23.4335659872032 -0.4379604108831421
[7,] 25.4074041761893 -0.0613743089436460
[8,] 24.2471298284230 0.0764901351102767
[9,] 24.7450703427834 -1.2443499508519478
[10,] 24.2193799579308 -0.4995919725966815
> cov.wt(traceSamples)
$cov
[,1] [,2]
[1,] 4.54864368811939 1.96444834328156
[2,] 1.96444834328156 1.09723665614730
$center
[1] 24.9626145462517535 -0.0163323659130855
$n.obs
[1] 9000
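As mentioned above, the temporary variables X, currentSample, and initSample can be dropped entirely. A leaner sketch of the same sampler, using the same means, cov, and k as before:
x1 <- 0                          # running value of variable 1 (marginal mean 0)
traceSamples <- matrix(NA, k, 2)
for (i in 1:k) {
  x2 <- rcmvnorm(n=1, mean=means, sigma=cov, dep=2, given=1, X=x1)  # draw variable 2 | variable 1
  x1 <- rcmvnorm(n=1, mean=means, sigma=cov, dep=1, given=2, X=x2)  # draw variable 1 | variable 2
  traceSamples[i, ] <- c(x2, x1)
}
colMeans(traceSamples)           # should approach c(25, 0)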
The Gibbs sampler is a Markov chain Monte Carlo (MCMC) algorithm, so you should check the convergence of the chain. The coda package provides some very useful tests.
library(coda)
MC <- mcmc(traceSamples)
plot(MC)
heidel.diag(MC)
     Stationarity start     p-value
     test         iteration
var1 passed      1          0.231
var2 passed      1          0.193

     Halfwidth Mean    Halfwidth
     test
var1 passed    24.9626 0.1228
var2 failed    -0.0163 0.0598
Here both variables pass the stationarity test, so we do not reject the null hypothesis that the Markov chain comes from a stationary distribution. Note that var2 fails the halfwidth test: its mean is essentially zero, and the halfwidth criterion is relative to the magnitude of the mean, so near-zero means fail it easily; a longer run would tighten the estimate.
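One more diagnostic worth running on the same mcmc object is the effective sample size, which shows how strongly autocorrelation in the chain reduces the information in the 9000 draws; a minimal sketch, assuming coda is still loaded:
effectiveSize(MC)   # effective number of independent draws for each variable
autocorr.diag(MC)   # autocorrelation of each chain at increasing lags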
Upvotes: 2