Reputation: 284
I am trying to fit a model that looks like this:
model_sim <- glmer(Accuracy ~ x * y * z_scaled + (1 | Participant),
                   family = binomial(link = "logit"), data = Data)
And it failed to converge, so I ran the allFit function:
(model_sim <- allFit(model_sim, maxfun = 1e+05))
to see if there was an actual reason to be concerned. It converged with 5 out of 6 optimizers, all with the same value, so I selected the one I always select, bobyqa with 1e+05 iterations (roughly as in the sketch after the output below), but it failed to converge again. Could anyone explain why this happened? Shouldn't it converge, based on the allFit results? What would you do in this situation?
$fixef
$llik
                       bobyqa                   Nelder_Mead                    nlminbwrap
                    -24286.15                     -24286.15                     -24286.15
nloptwrap.NLOPT_LN_NELDERMEAD     nloptwrap.NLOPT_LN_BOBYQA
                    -24286.15                     -24286.15
$theta
Participant.(Intercept)
bobyqa 0.6872716
Nelder_Mead 0.6872370
nlminbwrap 0.6872456
nloptwrap.NLOPT_LN_NELDERMEAD 0.6872732
nloptwrap.NLOPT_LN_BOBYQA 0.6872732
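For reference, the bobyqa-only refit was specified roughly like this (a sketch; the control settings are what I mean by "bobyqa with 1e+05 iterations"):

library(lme4)

## refit with bobyqa only, allowing up to 1e+05 function evaluations
model_bobyqa <- glmer(Accuracy ~ x * y * z_scaled + (1 | Participant),
                      family = binomial(link = "logit"), data = Data,
                      control = glmerControl(optimizer = "bobyqa",
                                             optCtrl = list(maxfun = 1e5)))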
Upvotes: 0
Views: 395
Reputation: 226761
You might be misunderstanding the meaning/purpose of allFit() (and maybe the convergence warnings themselves). allFit() doesn't actually change anything or make anything converge "better"; it just tests whether a range of different optimization algorithms converge to the same solution (or sufficiently similar solutions).
Your output shows that all of the available optimizers are converging to the same coefficients, at least to several decimal places (hopefully the difference between a Probability coefficient of 0.449 and one of 0.451 will not make a substantive difference to your conclusions), and that the difference in log-likelihoods is less than 0.005 log-likelihood units (which is small). So my conclusion in this case is that the fit is OK, and it doesn't actually matter which optimizer you use.
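If you want to quantify "sufficiently similar" yourself, something like the sketch below works; it assumes the allFit() result is kept in a separate object (the placeholder name model_orig stands for the original glmer fit, rather than overwriting it as in your code):

library(lme4)

## model_orig is a placeholder for the original glmer fit
af <- allFit(model_orig, maxfun = 1e5)
ss <- summary(af)

## spread of log-likelihoods across optimizers; anything well below
## ~0.01 log-likelihood units is negligible
diff(range(ss$llik))

## largest across-optimizer range among the fixed-effect estimates
max(apply(ss$fixef, 2, function(x) diff(range(x))))

## each element of the allFit object is an ordinary fitted model,
## e.g. af$bobyqa, so you can carry on with whichever one you prefer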
Upvotes: 1