Reputation: 97
I am using the maximum likelihood method to estimate a set of parameters. Now I am going to use the `mle` function from the `stats4` package in R to make a profile likelihood for one of the parameters. To do that, I need to fix one of the parameters when I call the `mle` function. Here is the code:
fr <- function(x1, x2, x3) {
    100 * (x2 - x1 * x1)^2 + (1 - x1)^2 + x3
}
out <- mle(fr, start = list(x1 = 1, x2 = 2, x3 = 3), method = "Nelder-Mead",
           control = list(trace = 4), fixed = list(x2 = 1))
and I get this error:
Error in solve.default(oout$hessian) : Lapack routine dgesv: system is exactly singular: U[1,1] = 0
If I do not use the `fixed` option, then I do not get this error, but the result is not a profile likelihood. Can you please let me know how I can solve this issue?
Upvotes: 2
Views: 6248
Reputation: 226732
tl;dr I'm not sure your objective function makes sense; my guess is that you have a typo. (Furthermore, if your objective function is working with `mle`, you don't need to set `fixed` explicitly: the `profile` method will automatically compute likelihood profiles for you ...)
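For reference, here's what that automatic profiling looks like on a toy model (a Normal likelihood invented here purely for illustration, not your function; the log-sd parameterization just keeps the standard deviation positive during optimization):

```r
library(stats4)

## toy data and a well-behaved negative log-likelihood,
## invented for illustration only
set.seed(101)
y <- rnorm(100, mean = 2, sd = 1)
nll <- function(mu, logsd) {
    -sum(dnorm(y, mean = mu, sd = exp(logsd), log = TRUE))
}
fit <- mle(nll, start = list(mu = 0, logsd = 0))
pp <- profile(fit)  ## profiles each free parameter in turn
confint(pp)         ## profile-likelihood confidence intervals
```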
Let's start with the full model, and let's use `optim()` rather than `stats4::mle()` (I know you want to get back to `mle` so that you can do likelihood profiling, but it's a little bit easier to debug `optim()` problems since there is one less layer of code to dig through).
Since `optim()` wants an objective function that takes a vector rather than a list of arguments, write a wrapper (we could also use `do.call(fr, as.list(p))`):
fr0 <- function(p) {
    fr(x1 = p[1], x2 = p[2], x3 = p[3])
}
opt1 <- optim(fn = fr0, par = c(1, 2, 3), method = "Nelder-Mead")
Results:
$par
[1] 28.51486 812.09978 -7095.39630
$value
[1] -6238.881
$counts
function gradient
502 NA
$convergence
[1] 1
Note that the estimated value of `x3` is strongly negative, as is the objective function value, and the convergence code is non-zero; in particular, a code of 1 means (from `?optim`):
‘1’ indicates that the iteration limit ‘maxit’ had been reached.
If we set `control = list(maxit = 2000)` and try again, `x3` and the objective function get even smaller, and the convergence code is still 1!
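Concretely, the retry looks like this (`fr` and `fr0` as above; exact values may differ slightly across platforms):

```r
## same objective and wrapper as above
fr <- function(x1, x2, x3) {
    100 * (x2 - x1 * x1)^2 + (1 - x1)^2 + x3
}
fr0 <- function(p) fr(x1 = p[1], x2 = p[2], x3 = p[3])

opt2 <- optim(fn = fr0, par = c(1, 2, 3), method = "Nelder-Mead",
              control = list(maxit = 2000))
opt2$convergence  ## still 1: the iteration limit was hit again
opt2$value        ## even more negative than before
```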
Then we look more carefully at the objective function and notice that it goes to `-Inf` as `x3` → `-Inf`, so we'll never reach the answer. (Presumably at some point we'd run into floating-point trouble, but 10 million iterations only gets us as far as `-1e17` ...)
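You can check the unboundedness directly by plugging an arbitrarily negative `x3` into the function from the question:

```r
## fr as defined in the question
fr <- function(x1, x2, x3) {
    100 * (x2 - x1 * x1)^2 + (1 - x1)^2 + x3
}
fr(1, 1, -1e6)  ## -1e+06: decreasing x3 lowers the objective without limit
```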
If I change `x3` to `x3^2` in your function, everything seems to work OK ... maybe that's what you intended ... ???
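For completeness, here's the `x3^2` version (assuming that's the function you meant) run back through `mle` with `x2` fixed. The objective is now bounded below, so the optimizer finds a proper minimum and the Hessian is invertible, which is why the `dgesv` error goes away:

```r
library(stats4)

## objective with x3^2 -- an assumed correction of the original function
fr2 <- function(x1, x2, x3) {
    100 * (x2 - x1 * x1)^2 + (1 - x1)^2 + x3^2
}
out <- mle(fr2, start = list(x1 = 1, x2 = 2, x3 = 3),
           method = "Nelder-Mead", fixed = list(x2 = 1))
coef(out)  ## x1 near 1, x2 held at 1, x3 near 0
vcov(out)  ## finite: the Hessian can now be inverted
```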
Upvotes: 3