PleaseHelp

Reputation: 124

AUC for Random Forest - different methods, different answers?

I'm trying to find a single method to give me AUC for a random forest model for both the training and testing sets without using MLeval.

Here's a good example of ROC on training data, and here's a good example of ROC on testing data. The first example, which computes AUC on the training data, gives AUC = 0.944:

plot.roc(rfFit$pred$obs[selectedIndices],
         rfFit$pred$M[selectedIndices], print.auc=TRUE)
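(For reference, selectedIndices isn't defined in the snippet above; in the linked example it is, as far as I can tell, just a logical index that restricts rfFit$pred to the held-out predictions made with the final tuning value, along these lines:)

# assumed construction of selectedIndices: keep only the held-out CV
# predictions that were made with the final mtry value
selectedIndices <- rfFit$pred$mtry == rfFit$bestTune$mtry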
Since I don't know how to adapt the first example to testing data, I applied the Sonar data to the second example and cross-verified the answer against the first example:

library(caret)
library(pROC)
library(mlbench)   # provides the Sonar data set
data(Sonar)

ctrl <- trainControl(method="cv",        # 10-fold cross-validation
                     summaryFunction=twoClassSummary, 
                     classProbs=T,
                     savePredictions = T)
rfFit <- train(Class ~ ., data=Sonar, 
               method="rf", preProc=c("center", "scale"), 
               trControl=ctrl, metric="ROC")
print(rfFit)
...
  mtry  ROC        Sens       Spec     
   2    0.9459428  0.9280303  0.8044444

result.predicted.prob <- predict(rfFit, Sonar, type="prob") # class probabilities on the full Sonar data

result.roc <- roc(Sonar$Class, result.predicted.prob$M)
plot(result.roc, print.thres="best", print.thres.best.method="closest.topleft", print.auc=TRUE)

 

But the AUC for the entire training data (i.e. Sonar) is 1.0, while rfFit shows 0.946, which is also different from the 0.944 above. So why am I getting different results, and what's the correct way to calculate AUC for both the training and testing sets?

Upvotes: 1

Views: 1421

Answers (1)

StupidWolf

Reputation: 46908

They are AUCs from different models.

The first AUC you see is an average AUC from your cross-validated training. You can see the per-fold values under:

head(rfFit$resample)
        ROC      Sens      Spec Resample
1 1.0000000 0.9090909 1.0000000   Fold02
2 0.9949495 1.0000000 0.7777778   Fold01
3 0.8045455 0.8181818 0.5000000   Fold03
4 1.0000000 1.0000000 0.8000000   Fold06
5 0.9595960 0.9090909 0.6666667   Fold05
6 0.9909091 0.9090909 0.9000000   Fold04

mean(rfFit$resample$ROC)
[1] 0.9540909

In this case it's 10-fold cross-validation: each fold trains on 90% of the data and tests on the remaining 10%, so every fold fits a slightly different model and therefore gives a slightly different AUC.
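If you want a single cross-validated AUC pooled over all the held-out predictions (rather than the per-fold average), you can compute it from rfFit$pred, which is kept because of savePredictions = T. A rough sketch, restricting to the final mtry much like the selectedIndices trick in your first snippet:

library(pROC)

# pool the held-out predictions from all CV folds, final mtry only
cv_preds <- rfFit$pred[rfFit$pred$mtry == rfFit$bestTune$mtry, ]

# one AUC over predictions made on data each fold did not train on
roc(cv_preds$obs, cv_preds$M)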

If you take the predictions of the final trained model on the full data, you get an AUC of 1, and this value is not included in the caret output.

So it depends on what your AUC should reflect. If it is the average AUC during cross-validated training, then use the ROC value from caret. If you just need one value to reflect the accuracy of the final model, then your second method is ok, bearing in mind it is computed on the same data the final model was trained on.
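If you additionally want an AUC on genuinely unseen test data, the usual approach is to hold out a portion of Sonar before training. A minimal sketch (the 75/25 split and the object names are just for illustration):

library(caret)
library(pROC)
library(mlbench)
data(Sonar)

set.seed(123)
# hold out ~25% of the rows as a test set that never enters training
inTrain  <- createDataPartition(Sonar$Class, p = 0.75, list = FALSE)
training <- Sonar[inTrain, ]
testing  <- Sonar[-inTrain, ]

ctrl <- trainControl(method = "cv",
                     summaryFunction = twoClassSummary,
                     classProbs = TRUE,
                     savePredictions = TRUE)

fit <- train(Class ~ ., data = training,
             method = "rf", preProc = c("center", "scale"),
             trControl = ctrl, metric = "ROC")

# training AUC: average of the cross-validated ROC values
mean(fit$resample$ROC)

# test AUC: probabilities for data the model has never seen
test_prob <- predict(fit, testing, type = "prob")
roc(testing$Class, test_prob$M)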

Upvotes: 2
