Using the mlr package in R, I am building random forest models and evaluating their classification accuracy with nested resampling, as described here. My problem is that the classification accuracies within the inner loops are usually about 15 percentage points higher than the outer-loop results: I observe accuracies of ~85% within the inner loop, but the outer loop usually ends up around 70%. I cannot share the data, but I am pasting the code I am using below.
How is this possible, and what might be the reason?
library(mlr)  # also attaches ParamHelpers

# Hyperparameter grid for the random forest
rf_param_set <- makeParamSet(
  ParamHelpers::makeDiscreteParam('mtry', values = c(3, 7, 14)),
  ParamHelpers::makeDiscreteParam('ntree', values = c(1000, 2000))
)
rf_tune_ctrl <- makeTuneControlGrid()

# Inner resampling: 5 bootstrap iterations for tuning
rf_inner_resample <- makeResampleDesc('Bootstrap', iters = 5)

# Accuracy with the .632+ bootstrap aggregation
acc632plus <- setAggregation(acc, b632plus)

# Wrap the learner so tuning happens inside each outer iteration
rf_learner <- makeTuneWrapper('classif.randomForest',
                              resampling = rf_inner_resample,
                              measures = list(acc),
                              par.set = rf_param_set,
                              control = rf_tune_ctrl,
                              show.info = TRUE)

# Outer resampling: 10 bootstrap iterations, predicting on train and test sets
# rf_outer_resample <- makeResampleDesc('Subsample', iters = 10, split = 2/3)
rf_outer_resample <- makeResampleDesc('Bootstrap', iters = 10, predict = 'both')

# clf_task is my classification task (data not shown)
rf_result_resample <- resample(rf_learner, clf_task,
                               resampling = rf_outer_resample,
                               extract = getTuneResult,
                               measures = list(acc, acc632plus),
                               show.info = TRUE)
You can see the resulting output below.
Resampling: OOB bootstrapping
Measures: acc.train acc.test acc.test
[Tune] Started tuning learner classif.randomForest for parameter set:
Type len Def Constr Req Tunable Trafo
mtry discrete - - 3,7,14 - TRUE -
ntree discrete - - 1000,2000 - TRUE -
With control class: TuneControlGrid
Imputation value: -0
[Tune-x] 1: mtry=3; ntree=1000
[Tune-y] 1: acc.test.mean=0.8415307; time: 0.1 min
[Tune-x] 2: mtry=7; ntree=1000
[Tune-y] 2: acc.test.mean=0.8405726; time: 0.1 min
[Tune-x] 3: mtry=14; ntree=1000
[Tune-y] 3: acc.test.mean=0.8330845; time: 0.1 min
[Tune-x] 4: mtry=3; ntree=2000
[Tune-y] 4: acc.test.mean=0.8415809; time: 0.3 min
[Tune-x] 5: mtry=7; ntree=2000
[Tune-y] 5: acc.test.mean=0.8395083; time: 0.3 min
[Tune-x] 6: mtry=14; ntree=2000
[Tune-y] 6: acc.test.mean=0.8373584; time: 0.3 min
[Tune] Result: mtry=3; ntree=2000 : acc.test.mean=0.8415809
[Resample] iter 1: 0.9961089 0.7434555 0.7434555
[Tune] Started tuning learner classif.randomForest for parameter set:
Type len Def Constr Req Tunable Trafo
mtry discrete - - 3,7,14 - TRUE -
ntree discrete - - 1000,2000 - TRUE -
With control class: TuneControlGrid
Imputation value: -0
[Tune-x] 1: mtry=3; ntree=1000
[Tune-y] 1: acc.test.mean=0.8479891; time: 0.1 min
[Tune-x] 2: mtry=7; ntree=1000
[Tune-y] 2: acc.test.mean=0.8578465; time: 0.1 min
[Tune-x] 3: mtry=14; ntree=1000
[Tune-y] 3: acc.test.mean=0.8556608; time: 0.1 min
[Tune-x] 4: mtry=3; ntree=2000
[Tune-y] 4: acc.test.mean=0.8502869; time: 0.3 min
[Tune-x] 5: mtry=7; ntree=2000
[Tune-y] 5: acc.test.mean=0.8601446; time: 0.3 min
[Tune-x] 6: mtry=14; ntree=2000
[Tune-y] 6: acc.test.mean=0.8586638; time: 0.3 min
[Tune] Result: mtry=7; ntree=2000 : acc.test.mean=0.8601446
[Resample] iter 2: 0.9980545 0.7032967 0.7032967
[Tune] Started tuning learner classif.randomForest for parameter set:
Type len Def Constr Req Tunable Trafo
mtry discrete - - 3,7,14 - TRUE -
ntree discrete - - 1000,2000 - TRUE -
With control class: TuneControlGrid
Imputation value: -0
[Tune-x] 1: mtry=3; ntree=1000
[Tune-y] 1: acc.test.mean=0.8772566; time: 0.1 min
[Tune-x] 2: mtry=7; ntree=1000
[Tune-y] 2: acc.test.mean=0.8750990; time: 0.1 min
[Tune-x] 3: mtry=14; ntree=1000
[Tune-y] 3: acc.test.mean=0.8730733; time: 0.1 min
[Tune-x] 4: mtry=3; ntree=2000
[Tune-y] 4: acc.test.mean=0.8782829; time: 0.3 min
[Tune-x] 5: mtry=7; ntree=2000
[Tune-y] 5: acc.test.mean=0.8741619; time: 0.3 min
[Tune-x] 6: mtry=14; ntree=2000
[Tune-y] 6: acc.test.mean=0.8687918; time: 0.3 min
[Tune] Result: mtry=3; ntree=2000 : acc.test.mean=0.8782829
[Resample] iter 3: 0.9902724 0.7329843 0.7329843
Answer:
What you're seeing is exactly why you want nested resampling in the first place: the inner resampling loop overfits the data to some extent and gives a misleadingly optimistic impression of the generalization performance. With the outer resampling in place, you can detect this, which is why the outer accuracy is lower.
The mlr tutorial has a much more detailed page on nested resampling (https://mlr.mlr-org.com/articles/tutorial/nested_resampling.html). In general, you're not seeing these results because you're doing anything wrong (unless you split the data manually in some problematic way); you're simply using a powerful optimization method that optimizes a bit more than it should, and the nested resampling lets you detect exactly that.
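To make the gap concrete, you can pull the tuned inner-loop accuracies out of the resample result and set them against the outer test accuracies. A minimal sketch, assuming the resample call from your question completed and its result is stored in rf_result_resample:

# Best inner-loop accuracy found by tuning in each outer iteration;
# each element of $extract is a TuneResult because extract = getTuneResult
inner_acc <- sapply(rf_result_resample$extract, function(tr) tr$y)

# Outer-loop test accuracy for the same iterations
outer_acc <- rf_result_resample$measures.test$acc

# The difference is the optimistic bias of the inner loop
mean(inner_acc) - mean(outer_acc)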
You could also try using cross-validation instead of bootstrapping; this may give more consistent results. See the sketch below.
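A minimal sketch of the same setup with cross-validation in both loops (the iteration counts here just mirror your bootstrap settings, not a recommendation):

# Swap the bootstrap resampling descriptions for cross-validation
rf_inner_resample <- makeResampleDesc('CV', iters = 5)
rf_outer_resample <- makeResampleDesc('CV', iters = 10)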