Reputation: 65
I'm trying to produce a random forest model with the caret package, using area under the ROC curve as the train metric, but I get the following warning:
Warning message:
In train.default(x = TrainData, y = TrainClasses, method = "rf", :
The metric "ROC" was not in the result set. Accuracy will be used instead.
Clearly this is not what I'm after, but I can't figure out where I'm going wrong.
Here is a reproducible example:
library(caret)
library(doParallel)
library(data.table)
cl <- makeCluster(detectCores() - 1) # I'm using 3 cores.
registerDoParallel(cl)
data(iris)
iris <- iris[iris$Species != 'virginica',] # to get two categories
TrainData <- as.data.table(iris[,1:4]) # My data is a data.table.
TrainClasses <- as.factor(as.character(iris[,5])) # to reset the levels to the two remaining flower types.
ctrl <- trainControl(method = 'oob',
                     classProbs = TRUE,
                     verboseIter = TRUE,
                     summaryFunction = twoClassSummary,
                     allowParallel = TRUE)

model.fit <- train(x = TrainData,
                   y = TrainClasses,
                   method = 'rf',
                   metric = 'ROC',
                   tuneLength = 3,
                   trControl = ctrl)
The result is the same if I don't create the parallel cluster and set allowParallel = FALSE.
In case it is of use, here's the result from a sessionInfo() call:
R version 3.2.2 (2015-08-14)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1
locale:
[1] LC_COLLATE=English_Australia.1252 LC_CTYPE=English_Australia.1252 LC_MONETARY=English_Australia.1252
[4] LC_NUMERIC=C LC_TIME=English_Australia.1252
attached base packages:
[1] parallel stats graphics grDevices utils datasets methods base
other attached packages:
[1] randomForest_4.6-10 data.table_1.9.6 doParallel_1.0.10 iterators_1.0.7 foreach_1.4.3
[6] caret_6.0-52 ggplot2_1.0.1 lattice_0.20-33
loaded via a namespace (and not attached):
[1] Rcpp_0.12.1 compiler_3.2.2 nloptr_1.0.4 plyr_1.8.3 tools_3.2.2
[6] digest_0.6.8 lme4_1.1-8 nlme_3.1-121 gtable_0.1.2 mgcv_1.8-7
[11] Matrix_1.2-2 brglm_0.5-9 SparseM_1.6 proto_0.3-10 BradleyTerry2_1.0-6
[16] stringr_1.0.0 gtools_3.5.0 stats4_3.2.2 grid_3.2.2 nnet_7.3-10
[21] minqa_1.2.4 reshape2_1.4.1 car_2.0-26 magrittr_1.5 scales_0.3.0
[26] codetools_0.2-14 MASS_7.3-44 splines_3.2.2 pbkrtest_0.4-2 colorspace_1.2-6
[31] quantreg_5.11 stringi_0.5-5 munsell_0.4.2 chron_2.3-47
Thanks. Looking forward to getting this fixed!
Upvotes: 4
Views: 5777
Reputation: 10954
You are correct. When you choose method = "oob", AUC-ROC is not one of the metrics that is returned.
You need to dig a little into the source code to figure out where the metrics are being computed. They are computed by method$oob, which is called by oobTrainWorkflow on line 19, in turn called by train.default on line 258. In your case method is models$rf, where the object models is loaded from an external package file called models.RData:
load(system.file("models", "models.RData", package = "caret"))
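If you would rather not load the .RData file by hand, newer caret versions (including the one in your sessionInfo(), I believe) export getModelInfo(), which returns the same model code:

rf_code <- getModelInfo("rf", regex = FALSE)[[1]]  # the same list as models$rf
rf_code$oob                                        # the OOB summary function shown below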
You can inspect the oob method for models$rf (which is the same as method):
function(x) {
  out <- switch(x$type,
                regression = c(sqrt(max(x$mse[length(x$mse)], 0)), x$rsq[length(x$rsq)]),
                classification = c(1 - x$err.rate[x$ntree, "OOB"],
                                   e1071::classAgreement(x$confusion[, -dim(x$confusion)[2]])[["kappa"]]))
  names(out) <- if (x$type == "regression") c("RMSE", "Rsquared") else c("Accuracy", "Kappa")
  out
}
You can see that when classification RF is requested, only the accuracy and kappa metrics are computed.
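For instance, calling that function on the fitted forest from your example (after the load() above) shows exactly which metric names train() has to work with, and hence why it falls back to Accuracy:

names(models$rf$oob(model.fit$finalModel))
# [1] "Accuracy" "Kappa"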
You can tweak method$oob to use method$prob(mod$fit) and compute the AUC-ROC.
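Here is a rough, untested sketch of that idea. It copies caret's built-in rf code via getModelInfo() and swaps in an oob function whose result includes a value named "ROC", so that metric = 'ROC' is found. Instead of method$prob it uses the OOB vote fractions already stored on the fitted forest (x$votes), assumes a two-class problem, and pulls in pROC as an extra dependency for the AUC:

library(pROC)  # assumed extra dependency for the AUC

customRF <- getModelInfo("rf", regex = FALSE)[[1]]  # a copy of models$rf

customRF$oob <- function(x) {
  # x is the fitted randomForest object; x$votes holds the OOB vote
  # fractions per class and x$y the observed classes.
  roc_obj <- pROC::roc(x$y, x$votes[, 1])  # levels/direction auto-detected here
  out <- c(as.numeric(pROC::auc(roc_obj)),
           1 - x$err.rate[x$ntree, "OOB"],
           e1071::classAgreement(x$confusion[, -dim(x$confusion)[2]])[["kappa"]])
  names(out) <- c("ROC", "Accuracy", "Kappa")  # "ROC" now matches metric = 'ROC'
  out
}

model.fit <- train(x = TrainData,
                   y = TrainClasses,
                   method = customRF,   # pass the modified model code as a list
                   metric = 'ROC',
                   tuneLength = 3,
                   trControl = ctrl)

Whether OOB-based AUC is what you really want to tune on is a separate question; switching trainControl() to a resampling method such as method = 'cv' would let your twoClassSummary compute ROC without any of this.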
Upvotes: 3