Daniel Soutar

Reputation: 886

Seriously weird ROC curve

So I have a very challenging dataset to work with, but even with that in mind, the ROC curves I am getting seem quite bizarre and look wrong.

Below is my code. I have used the scikitplot library (skplt) to plot the ROC curves, passing in my predicted probabilities and the ground-truth labels, so I cannot reasonably be getting the plotting itself wrong. Is there something crazily obvious that I am missing here?

import matplotlib.pyplot as plt
import scikitplot as skplt
from sklearn import feature_selection
from sklearn.metrics import f1_score, make_scorer
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline
from sklearn.svm import SVR

# My dataset - note that m (number of examples) is 115. These are histograms that
# already sum to 1, so I am doubtful that further preprocessing is necessary.
X, y = load_new_dataset(positives, positive_files, m=115, upper=21, range_size=10, display_plot=False)

# Partition - class balance is 0.87 : 0.13 for the negative and positive classes respectively
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.10, stratify=y)

# Pick a baseline classifier - Naive Bayes
nb = GaussianNB()

# Very large class imbalance, so use stratified K-fold cross-validation
cross_val = StratifiedKFold(n_splits=10)

# Use RFE for feature selection
est = SVR(kernel="linear")
selector = feature_selection.RFE(est)

# Create pipeline, nothing fancy here
clf = Pipeline(steps=[("feature selection", selector), ("classifier", nb)])

# Score using the F1 score due to class imbalance - accuracy unlikely to be meaningful
scores = cross_val_score(clf, X_train, y_train, cv=cross_val,
                         scoring=make_scorer(f1_score, average='micro'))

# Fit and make predictions. Use these to plot ROC curves.
print(scores)
clf.fit(X_train, y_train)
y_pred = clf.predict_proba(X_test)
skplt.metrics.plot_roc_curve(y_test, y_pred)
plt.show()

And below is the starkly binary ROC curve:

ROC curve for the code snippet

I understand that I can't expect outstanding performance on such a challenging dataset, but even so I cannot fathom why I am getting such a binary result, particularly for the ROC curves of the individual classes. No, I cannot get more data, although I sincerely wish I could. If this really is valid code, then I will just have to make do with it and perhaps report the micro-averaged F1 score, which does not look too bad.
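In case it helps, this is how I sanity-checked what is actually going into the plot (same variable names as in the snippet above; the values in the comments are what I would expect given m = 115 and test_size = 0.10, not verbatim output):

import numpy as np

# How many samples (and how many positives) actually end up in the held-out set?
print(X_test.shape)                      # expecting only ~12 rows with test_size=0.10 and m=115
print(np.bincount(y_test.astype(int)))   # class counts in the test set, e.g. [10  2]

# predict_proba returns one probability column per class;
# skplt.metrics.plot_roc_curve expects this full (n_samples, n_classes) array.
print(y_pred.shape)                      # (n_test_samples, 2)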

For reference, using the make_classification function from sklearn in the code snippet below, I get the following ROC curve:

import numpy as np
import matplotlib.pyplot as plt
import scikitplot as skplt
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score, make_scorer
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB

# Randomly generate a dataset with similar characteristics (size, class balance,
# num_features)
X, y = make_classification(n_samples=103, n_features=21, random_state=0, n_classes=2,
                           weights=[0.87, 0.13], n_informative=5, n_clusters_per_class=3)

# Indices of the positive class (take the index array out of np.where's tuple)
positives = np.where(y == 1)[0]

# Split out the minority and majority classes (not used further below)
X_minority, X_majority = np.take(X, positives, axis=0), np.delete(X, positives, axis=0)
y_minority, y_majority = np.take(y, positives, axis=0), np.delete(y, positives, axis=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.10, stratify=y)

# Cross-validation again
cross_val = StratifiedKFold(n_splits=10)

# Use Naive Bayes again for consistency
clf = GaussianNB()

# Likewise for the evaluation metric
scores = cross_val_score(clf, X_train, y_train, cv=cross_val,
                         scoring=make_scorer(f1_score, average='micro'))

print(scores)

# Fit, predict, plot results
clf.fit(X_train, y_train)
y_pred = clf.predict_proba(X_test)
skplt.metrics.plot_roc_curve(y_test, y_pred)
plt.show()

ROC curve for randomly generated dataset using make_classification

Am I doing something wrong? Or is this what I should expect given these characteristics?

Upvotes: 0

Views: 1471

Answers (1)

Daniel Soutar

Reputation: 886

Thanks to Stev's kind suggestion of increasing the test size, the curves I ended up with were far smoother and exhibited much less variance. Using SMOTE in this case was also very helpful, and I would advise it (via imblearn, perhaps) for anyone else with a similar issue. Roughly what that looks like is sketched below.
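For completeness, here is a minimal sketch of the combination, not my exact code: the 0.30 test size is just an illustrative value, and X and y are the same arrays as in the question.

import matplotlib.pyplot as plt
import scikitplot as skplt
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# A larger held-out set gives the ROC curve many more thresholds to step through
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, stratify=y)

# Oversample the minority class on the training split only - resampling before
# the split would leak synthetic points into the test set
X_train_res, y_train_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

clf = GaussianNB()
clf.fit(X_train_res, y_train_res)

y_pred = clf.predict_proba(X_test)
skplt.metrics.plot_roc_curve(y_test, y_pred)
plt.show()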

Upvotes: 1
