Reputation: 177
I am trying to use the R function performance::r2() to calculate the marginal and conditional R2 of a fitted model of class glmmTMB. To ask this question, I am making use of existing code provided in the document 'Getting started with the glmmTMB package' (Bolker 2024), available at link, and the Owls dataset included in the package glmmTMB. All code below is taken from that document verbatim, except the call to performance::r2() in the last line. My question concerns the warning message (shown below) thrown by performance::r2() when it is passed the fitted model of class glmmTMB.
library(glmmTMB)
library(performance)

Owls <- transform(Owls,
                  Nest = reorder(Nest, NegPerChick),
                  NCalls = SiblingNegotiation,
                  FT = FoodTreatment)

fit_zipoisson <- glmmTMB(NCalls ~ (FT + ArrivalTime) * SexParent +
                           offset(log(BroodSize)) + (1 | Nest),
                         data = Owls,
                         ziformula = ~1,
                         family = poisson)

performance::r2(fit_zipoisson)
Running the lines above yields the following output (copied directly from the console):
> performance::r2(fit_zipoisson)
# R2 for Mixed Models
Conditional R2: 0.086
Marginal R2: 0.019
Warning message:
mu of 2.1 is too close to zero, estimate of random effect variances may be unreliable.
I am trying to determine whether this warning indicates a simple incompatibility between the packages glmmTMB and performance, with no bearing on model fit, or whether it points to a real or potential problem with the fitted model. I have seen this warning consistently when fitting a simpler zero-inflated mixed-effects model to another dataset; I am using the Owls dataset from glmmTMB only to make the issue reproducible.
I did find a discussion of this from 2022 (link), but I am unsure whether developments in either glmmTMB or performance have made that discussion obsolete.
Upvotes: 1
Views: 176
Reputation: 226751
The code that implements this warning throws it if mu <- exp(null.fixef) (i.e., the predicted value of the response for the baseline case [intercept] of the model) is less than 6.
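For concreteness, here is a minimal sketch of what that check amounts to for the Owls model above. This is my reconstruction from the description, not the actual insight/performance source; in particular, the intercept-only null model and the exact warning wording are assumptions.

## Sketch only: reconstructs the check described above, not the actual
## insight/performance code. Assumes mu is the exponentiated intercept of
## the corresponding intercept-only (null) model.
null_fit <- glmmTMB(NCalls ~ 1 + offset(log(BroodSize)) + (1 | Nest),
                    data = Owls, ziformula = ~1, family = poisson)
mu <- exp(fixef(null_fit)$cond[["(Intercept)"]])
if (mu < 6) {
  warning(sprintf("mu of %.1f is too close to zero, estimate of random effect variances may be unreliable.",
                  mu))
}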
The comment says that this machinery is taken from Nakagawa et al. 2017 (reference below). Looking at this paper and its supplementary materials, there is a lot of discussion of the different approximations for estimating the observation-level variance (delta method, lognormal, trigamma), with indications that the approximations are better, and closer to each other, when mu is large. For example:
[figure comparing the observation-level variance approximations as a function of lambda, from the supplementary material, appendices S1-S5 and S7]

I believe that in this figure the solid line, ln(1 + 1/lambda), represents the log-normal approximation; the trigamma approximation, psi_1(lambda), is recommended by the authors, but may be less generally applicable. One can see that the results become very similar as lambda (== mu) gets larger, but I have no idea how the threshold mu ≥ 6 was chosen.
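To get a feel for how quickly the approximations converge, you can tabulate them directly. The log-normal and trigamma forms are the expressions above; the delta-method form 1/lambda is my reading of the paper, so treat it as an assumption:

## Observation-level variance approximations for a Poisson GLMM with log link,
## evaluated across a range of lambda (the expected response, == mu here)
lambda <- c(0.5, 1, 2.1, 6, 20, 100)
cbind(lambda,
      delta     = 1 / lambda,        # delta method (my reading of the paper)
      lognormal = log1p(1 / lambda), # log-normal: ln(1 + 1/lambda)
      trigamma  = trigamma(lambda))  # trigamma: psi_1(lambda)

At lambda = 2.1 (the mu in the warning above) the three values still differ substantially; the gaps shrink steadily as lambda grows.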
There's a lot of code archaeology to be done here. Some of this code is attributed to me (!), but I think I got it from (some version of) the MuMIn package (r.squaredGLMM()) or from piecewiseSEM. The test occurs in code I have lying around, in a file dated 2016-08-23, commented as follows:
##' Cleaned-up/adapted version of Jon Lefcheck's code from SEMfit;
##' also incorporates some stuff from MuMIn::rsquaredGLMM.
##' Computes Nakagawa/Schielzeth/Johnson analogue of R^2 for
##' GLMMs. Should work for [g]lmer(.nb), glmmTMB models ...
So far the closest I've been able to get to an actual source for this is line 150 of R/r.squaredGLMM.R in version 1.15.6 of MuMIn (2016-01-07), which includes a test with the same threshold for the Poisson model. (MuMIn doesn't have a public version-control system, so the archaeology is a little bit painful ...)
I think this probably warrants more discussion on the insight issues list ...
Nakagawa, Shinichi, Paul C. D. Johnson, and Holger Schielzeth. 2017. “The Coefficient of Determination R2 and Intra-Class Correlation Coefficient from Generalized Linear Mixed-Effects Models Revisited and Expanded.” Journal of The Royal Society Interface 14 (134): 20170213. https://doi.org/10.1098/rsif.2017.0213.
Upvotes: 2