sacvf

Reputation: 2533

Why "optimx" in R give different results when changing the starting values?

I need to use "optimx", but I found that the output changes when I change the starting values. Is this normal, or is there an error in my script?

    # three identical 3 x 5 x 4 test arrays
    dat1 <- array(1:60, c(3, 5, 4))
    dat2 <- array(1:60, c(3, 5, 4))
    dat3 <- array(1:60, c(3, 5, 4))

    # reorder dimensions
    dat1 <- aperm(dat1, c(3, 1, 2))
    dat2 <- aperm(dat2, c(3, 1, 2))
    dat3 <- aperm(dat3, c(3, 1, 2))

    # flatten each array into a matrix (rows = first dimension)
    dat1a <- dat1; dim(dat1a) <- c(dim(dat1)[1], prod(dim(dat1)[2:3]))
    dat2a <- dat2; dim(dat2a) <- c(dim(dat2)[1], prod(dim(dat2)[2:3]))
    dat3a <- dat3; dim(dat3a) <- c(dim(dat3)[1], prod(dim(dat3)[2:3]))

Case one:

    library(optimx)

    fun <- function(x1, x2, y) {
      keep <- !(is.na(x1) | is.na(x2) | is.na(y))
      if (sum(keep) > 0) {
        # sum of squared residuals of the model y = (p1*x1 + p2*x2 + p3)^p4 + p5
        best <- function(p, x1, x2, y) {
          sum((y[keep] - (((p[1] * x1[keep] + p[2] * x2[keep] + p[3])^p[4]) + p[5]))^2)
        }
        res <- optimx(c(0.5, 3, 4, 0.1, 1.8), best, x1 = x1, x2 = x2, y = y)
        res <- coef(res)[1:5]
      } else {
        res <- c(NA, NA, NA, NA, NA)
      }
      return(res)
    }

    res2 <- mapply(fun, x1 = as.data.frame(dat1a), x2 = as.data.frame(dat2a), y = as.data.frame(dat3a))

res2:

        V1         V2         V3          V4         V5         V6         V7
[1,] -6.4508094  4.3192551 -4.4118228  0.96978160 -5.3236129  1.7360552  6.7636543
[2,] -0.7073374 -0.7404623 -0.7490429 -0.56937504 -0.6729419 -0.7373379 -0.7387721

Case two:

    # same function, but with different starting values
    fun <- function(x1, x2, y) {
      keep <- !(is.na(x1) | is.na(x2) | is.na(y))
      if (sum(keep) > 0) {
        best <- function(p, x1, x2, y) {
          sum((y[keep] - (((p[1] * x1[keep] + p[2] * x2[keep] + p[3])^p[4]) + p[5]))^2)
        }
        res <- optimx(c(1, 1, 1, 1, 1), best, x1 = x1, x2 = x2, y = y)
        res <- coef(res)[1:5]
      } else {
        res <- c(NA, NA, NA, NA, NA)
      }
      return(res)
    }

    res3 <- mapply(fun, x1 = as.data.frame(dat1a), x2 = as.data.frame(dat2a), y = as.data.frame(dat3a))

res3:

 V1         V2         V3         V4        V5        V6         V7        V8
 [1,]  0.6018246  0.1993663 -0.2520812 -0.1702029 -1.176793 -6.513526 -0.1749120 -1.329519
 [2,]  7.6637890  3.4957285  3.0466838  2.2510481  1.601970  1.245830  1.0175985  0.852469

Upvotes: 2

Views: 1696

Answers (1)

LyzandeR

Reputation: 37879

There is no error in your script; this is simply how the optimiser works. In your case you are calling optimx with the default parameters (no method specified, and no upper or lower bounds), which means that internally optimx will use the base R optim function with the default Nelder-Mead method.
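For instance, you can name the method explicitly instead of relying on the defaults. A minimal sketch with a toy objective (fn here is a placeholder, not the question's model):

    library(optimx)

    fn <- function(p) sum((p - 1:5)^2)  # toy objective, minimum at p = 1:5

    optimx(c(0.5, 3, 4, 0.1, 1.8), fn, method = "Nelder-Mead")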

I quote from Wikipedia (which may not be the best source, but it explains this correctly):

The Nelder–Mead method or downhill simplex method or amoeba method is a commonly used nonlinear optimization technique, which is a well-defined numerical method for problems for which derivatives may not be known. However, the Nelder–Mead technique is a heuristic search method that can converge to non-stationary points[1] on problems that can be solved by alternative methods.[2]

The key word in the above quote is heuristic:

A heuristic is any approach to problem solving, learning, or discovery that employs a practical methodology not guaranteed to be optimal or perfect, but sufficient for the immediate goals.

Therefore, in your case the optim function is finding a local minimum, which provides a fast but possibly sub-optimal solution (i.e. it does not necessarily find the global minimum). This is done for speed and computational efficiency.
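A self-contained illustration (a toy two-parameter function with several local minima along p[1], unrelated to the question's model):

    # the sine term creates several basins; the quadratic terms keep f bounded below
    f <- function(p) sin(3 * p[1]) + 0.1 * p[1]^2 + (p[2] - 1)^2

    optim(c(-2, 0), f)$par  # converges to one local minimum
    optim(c( 2, 0), f)$par  # a different start lands in a different basin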

So, by changing the initial values you change the starting point of the method, and therefore each time you may end up in a different local minimum.
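A common way to reduce this dependence on a single starting point is a multi-start: rerun the optimiser from several random starting values and keep the fit with the lowest objective. A sketch, using the same toy f as above:

    f <- function(p) sin(3 * p[1]) + 0.1 * p[1]^2 + (p[2] - 1)^2

    set.seed(1)
    starts <- replicate(20, runif(2, -3, 3), simplify = FALSE)
    fits   <- lapply(starts, function(s) optim(s, f))

    # keep the restart that reached the lowest objective value
    fits[[which.min(vapply(fits, `[[`, numeric(1), "value"))]]$par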

The rest of the methods work in a similar way, but it makes sense to research each method to figure out whether it computes a heuristic solution or not.
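optimx makes such comparisons easy, since its method argument accepts a vector of method names (one row of results per method). A sketch with the same toy function:

    library(optimx)

    f <- function(p) sin(3 * p[1]) + 0.1 * p[1]^2 + (p[2] - 1)^2

    # compare the "value" column across optimisers
    optimx(c(2, 0), f, method = c("Nelder-Mead", "BFGS", "L-BFGS-B"))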

Here is another similar question I found, which gives a good explanation.

Upvotes: 3
