Reputation: 15
I was working with R version 3.6.3 and recently updated to version 4.0.3. Below is an example of the models I am working with.
Model0 <- lmer(accuracy ~ Temperature + Relative_Humidity + scaledweek + (1 | pxids), data = ptrails.df)
Model1 <- lmer(accuracy ~ CO + Temperature + Relative_Humidity + scaledweek + (1 | pxids), data = ptrails.df)
Model2 <- lmer(accuracy ~ pm10 + Temperature + Relative_Humidity + scaledweek + (1 | pxids), data = ptrails.df)
Model3 <- lmer(accuracy ~ NO + Temperature + Relative_Humidity + scaledweek + (1 | pxids), data = ptrails.df)
anova(Model0, Model1, Model2, Model3)
The idea is to compare each model with the base model (Model0) to determine which variable has a significant effect.
Example of output:
I do not get p-values for Model2 and Model3; this was not the case in the previous version. Comparing Model0 with Model1, then Model0 with Model2, and so on does give me p-values, but I have very large data, so I would need to run the comparisons together.
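The pairwise base-vs-candidate comparisons described above can be automated with a loop. A minimal sketch, using base-R lm() fits on the built-in mtcars data as stand-ins for the lmer() fits on ptrails.df (the base/candidates names here are made up for illustration):

```r
# Compare each candidate model against a shared base model in one loop.
# lm() on mtcars stands in here for lmer() on ptrails.df.
base <- lm(mpg ~ wt, data = mtcars)
candidates <- list(
  hp   = lm(mpg ~ wt + hp,   data = mtcars),
  qsec = lm(mpg ~ wt + qsec, data = mtcars)
)
# One base-vs-candidate test per list entry, keeping the names.
tests <- lapply(candidates, function(m) anova(base, m))
tests$hp  # test for adding hp to the base model
```

The same pattern should work with the lmer() fits: put Model1 through Model3 in the named list and call anova(Model0, m) inside the loop.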
Upvotes: 0
Views: 231
Reputation: 226162
This is a little bit of a guess because you haven't provided a reproducible example, but this probably has to do with versions of lme4
(not R) before and after version 1.1-24: the NEWS file for lme4 reports that
"anova() now returns a p-value of NA if the df difference between two models is 0 (implying they are equivalent models)"
Here "df difference" means the difference in the numbers of parameters estimated. There are two ways that different models could have equal numbers of df:

- equivalent models: if f and g are factors, then including f*g and f:g gives different parameterizations but an equivalent model fit. In this case the change in deviance ("Chisq") will also be zero. This appears to be what's happening in models 0 vs 2 in your case (although it puzzles me that npar is different: that is hard to understand without a reproducible example. Perhaps you added a perfectly correlated predictor, or your model fit was singular?). In this case it could be argued that the p-value is 1, but NA is also reasonable (see this discussion).
- non-nested models: for example, the first model includes numeric covariate A, the second model includes numeric covariate B. This is not a case that occurred to the package developers ... In this case the likelihood ratio test is inappropriate, so it doesn't make sense to return a p-value at all.

If you want to compare non-nested models you either need something like Vuong's test, or you can just compare the AIC values. (I would argue that bbmle::AICtab() gives a more useful format for comparison ...)
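The "equivalent models" case is easy to reproduce with plain lm(); a toy sketch (made-up data, not the asker's ptrails.df):

```r
# f*g and f:g parameterize the same four cell means, so the two fits
# are equivalent even though the coefficient tables look different.
set.seed(101)
d <- data.frame(f = gl(2, 10), g = gl(2, 1, 20), y = rnorm(20))
m1 <- lm(y ~ f * g, data = d)
m2 <- lm(y ~ f:g, data = d)
anova(m1, m2)  # Df = 0 and no change in RSS: no usable test
AIC(m1, m2)    # identical AIC values, as expected for equivalent fits
```

With lmer() fits, the same situation is what produces the NA p-value described in the NEWS entry quoted above.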
Upvotes: 2