Reputation: 21
My dataset is about penalty kicks; it contains 106 rows and the features are :
I would like to fit a multinomial logistic regression on this data to model the kick direction as a function of the other two features. I am using the Aligators BUGS example as a template: http://www.openbugs.net/Examples/Aligators.html
My BUGS file is the following:
model
{
   # PRIORS
   alpha[1] <- 0   # zero contrast for baseline direction
   for (k in 2 : K) {
      alpha[k] ~ dnorm(0, 0.00001)   # vague priors
   }

   # Loop around Foot:
   for (k in 1 : K) {
      beta[1, k] <- 0   # corner-point contrast with first foot
   }
   for (i in 2 : I) {
      beta[i, 1] <- 0   # zero contrast for baseline direction
      for (k in 2 : K) {
         beta[i, k] ~ dnorm(0, 0.00001)   # vague priors
      }
   }

   # Loop around Time:
   for (k in 1 : K) {
      gamma[1, k] <- 0   # corner-point contrast with first Time
   }
   for (j in 2 : J) {
      gamma[j, 1] <- 0   # zero contrast for baseline direction
      for (k in 2 : K) {
         gamma[j, k] ~ dnorm(0, 0.00001)   # vague priors
      }
   }

   # LIKELIHOOD
   for (i in 1 : I) {          # loop around Foot
      for (j in 1 : J) {       # loop around Time
         # Multinomial response
         X[i, j, 1 : K] ~ dmulti(p[i, j, 1 : K], n[i, j])
         n[i, j] <- sum(X[i, j, ])
         for (k in 1 : K) {    # loop around Kick_Direction
            p[i, j, k] <- phi[i, j, k] / sum(phi[i, j, ])
            log(phi[i, j, k]) <- alpha[k] + beta[i, k] + gamma[j, k]
         }
      }
   }
}
I use rjags and get the following error:
Error in jags.model("kick_dir.bug", data, inits) : RUNTIME ERROR:
Possible directed cycle involving some or all
of the following nodes:
X[1,1,1:3]
X[1,2,1:3]
X[2,1,1:3]
X[2,2,1:3]
n[1,1]
n[1,2]
n[2,1]
n[2,2]
What did I do wrong?
Thanks in advance.
Upvotes: 2
Views: 251
Reputation: 565
I had a similar problem with the dmulti
distribution, because it seems natural to write the model this way.
The problem is in these two lines:
X[i,j,1 : K] ~ dmulti( p[i, j, 1 : K] , n[i, j] )
n[i, j] <- sum(X[i, j, ])
JAGS detects that the same variable X
appears on both the left-hand and right-hand side here: X is the response of dmulti, and its sum defines n, which is in turn a parameter of that same dmulti. That creates the directed cycle reported in the error, which JAGS prohibits.
As one possible workaround, you can compute n[i, j]
outside the model and feed it in as part of the data.
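A minimal sketch of that workaround in R (assuming X is your I x J x K count array and I, J, K, inits are already defined as in your script): delete the line n[i, j] <- sum(X[i, j, ]) from the model file and supply n as data instead:

```r
library(rjags)

# Totals of each multinomial cell, computed outside the model:
# summing X over its third dimension gives an I x J matrix.
n <- apply(X, c(1, 2), sum)

# Pass n in with the rest of the data; the model file must no
# longer contain the 'n[i, j] <- sum(X[i, j, ])' line.
data <- list(X = X, n = n, I = I, J = J, K = K)
m <- jags.model("kick_dir.bug", data, inits)
```

Since n is now observed data rather than a node defined from X, there is no cycle for JAGS to complain about.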
There is no such issue in Stan, by the way, where those sums are calculated automatically.
Upvotes: 0