R nloptr problem rebuilding linear regression: optimization stops too early


I am trying to rebuild a multiple linear regression with nloptr, but the optimization stops too early. Why is this? (I know nloptr is meant for non-linear models, but I would still expect it to find the correct solution, as e.g. the optim package does.)

model <- lm(mpg ~ cyl + hp + wt, data = mtcars)
summary(model)
model$coefficients


Call:
lm(formula = mpg ~ cyl + hp + wt, data = mtcars)

Residuals:
    Min      1Q  Median      3Q     Max 
-3.9290 -1.5598 -0.5311  1.1850  5.8986 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 38.75179    1.78686  21.687  < 2e-16 ***
cyl         -0.94162    0.55092  -1.709 0.098480 .  
hp          -0.01804    0.01188  -1.519 0.140015    
wt          -3.16697    0.74058  -4.276 0.000199 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 2.512 on 28 degrees of freedom
Multiple R-squared:  0.8431,    Adjusted R-squared:  0.8263 
F-statistic: 50.17 on 3 and 28 DF,  p-value: 2.184e-11

So with classical lm R-squared is 0.8431, now rebuilding it with nloptr:

preds <- c('cyl','hp','wt')

opty_func <- function(para){
  # scale each predictor column by its coefficient (para[2:4])
  mtcars[, paste0(preds, '_pred')] <- mapply(function(x, b) x * b,
                                             mtcars[, preds],
                                             b = para[2:4])
  
  # fitted values: intercept plus the sum of the scaled predictors
  mtcars[, 'predsum'] <- para[1] + rowSums(mtcars[, paste0(preds, '_pred')])
  
  # return the negative R-squared, so minimising it maximises R-squared
  sst <- sum((mtcars$mpg - mean(mtcars$mpg))^2)
  sse <- sum((mtcars$predsum - mtcars$mpg)^2)
  rsq <- 1 - (sse / sst)
  -rsq
}



# To check that the function is correct, run
opty_func(model$coefficients)
[1] -0.84315
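
(Side note: an equivalent, more compact way to write the objective is via a design matrix. This is only a sketch for comparison, not the code used in the call below; it assumes the preds vector and model object defined above.)

# Equivalent objective using a design matrix (sketch)
X <- cbind(1, as.matrix(mtcars[, preds]))  # column of 1s for the intercept, then cyl, hp, wt
y <- mtcars$mpg
sst <- sum((y - mean(y))^2)

opty_func_vec <- function(para) {
  pred <- as.vector(X %*% para)   # para[1] = intercept, para[2:4] = slopes
  sse  <- sum((y - pred)^2)
  -(1 - sse / sst)                # negative R-squared, same as opty_func
}

opty_func_vec(model$coefficients) # also about -0.84315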


optmodel <- nloptr::nloptr(
  x0     = rep(0, 4),
  eval_f = opty_func,
  lb     = rep(-Inf, 4),
  ub     = rep(Inf, 4),
  opts   = list("algorithm"  = "NLOPT_LN_AUGLAG",
                "xtol_rel"   = 1.0e-10,
                "ftol_abs"   = 1.0e-10,
                "maxeval"    = 1000,
                "local_opts" = list("algorithm" = "NLOPT_LN_AUGLAG",
                                    "xtol_rel"  = 1.0e-10,
                                    "ftol_abs"  = 1.0e-10)))
optmodel

Result:

    Call:
nloptr::nloptr(x0 = rep(0, 4), eval_f = opty_func, lb = rep(-Inf, 
    4), ub = rep(Inf, 4), opts = list(algorithm = "NLOPT_LN_AUGLAG", 
    xtol_rel = 1e-10, ftol_abs = 1e-10, maxeval = 1000, local_opts = list(algorithm = "NLOPT_LN_AUGLAG", 
        xtol_rel = 1e-10, ftol_abs = 1e-10)))


Minimization using NLopt version 2.4.2 

NLopt solver status: 3 ( NLOPT_FTOL_REACHED: Optimization stopped because ftol_rel or ftol_abs (above) was reached. )

Number of Iterations....: 102 
Termination conditions:  xtol_rel: 1e-10    ftol_abs: 1e-10 maxeval: 1000 
Number of inequality constraints:  0 
Number of equality constraints:    0 
Optimal value of objective function:  2.4169018891219
Optimal value of controls: 1.64369 1.088852 0.02969905 1.021684

How can I force nloptr to run more iterations? The status message is confusing, because the tolerance cannot really have been reached — or has it, at 2.41 vs. -0.84? BTW: the optim package finds the correct R-squared (but is slower).
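
For reference, the optim run mentioned above would look roughly like this (the exact call is not shown in the question; the method, maxit and reltol values are my own assumptions):

opt <- optim(par = rep(0, 4), fn = opty_func,
             method = "Nelder-Mead",
             control = list(maxit = 5000, reltol = 1e-12))
opt$par    # compare against model$coefficients
-opt$value # compare against 0.8431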


1 Answer

Answer from Luitpold Wienerle:

I found out that the answer to my question is: use "algorithm" = "NLOPT_LN_SBPLX".
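
In other words, keep the call from the question and only swap the algorithm. A sketch (the larger maxeval is my own precaution, not part of the answer):

optmodel_sbplx <- nloptr::nloptr(
  x0     = rep(0, 4),
  eval_f = opty_func,
  lb     = rep(-Inf, 4),
  ub     = rep(Inf, 4),
  opts   = list("algorithm" = "NLOPT_LN_SBPLX",
                "xtol_rel"  = 1.0e-10,
                "ftol_abs"  = 1.0e-10,
                "maxeval"   = 10000))
optmodel_sbplx$solution    # should be close to model$coefficients
-optmodel_sbplx$objective  # should be close to 0.8431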