Gulzar

Reputation: 27896

How to plot ROC and calculate AUC for binary classifier with no probabilities (svm)?

I have an SVM classifier (LinearSVC) that outputs final classifications for every sample in the test set, something like

1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1

and so on.

The "truth" labels is also something like

1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1

I would like to run that SVM with some parameters, generate points for the ROC curve, and calculate the AUC.

I could do this myself, but I am sure someone has done it before me for cases like this.

Unfortunately, everything I can find is for cases where the classifier returns probabilities rather than hard predictions, like here or here.

I thought this would work, but from sklearn.metrics import plot_roc_curve is not found!
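(Side note: plot_roc_curve seems to be version-dependent; it was only added in scikit-learn 0.22 and was removed again in 1.2. On recent versions the equivalent helper is RocCurveDisplay, which falls back to decision_function for classifiers like LinearSVC that have no predict_proba. A minimal sketch, assuming a fitted clf and a test split X_test, y_test:)

import matplotlib.pyplot as plt
from sklearn.metrics import RocCurveDisplay

# from_estimator tries predict_proba first and falls back to
# decision_function, so it works for LinearSVC out of the box
RocCurveDisplay.from_estimator(clf, X_test, y_test)
plt.show()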

Is there anything online that fits my case?

Thanks

Upvotes: 1

Views: 3361

Answers (2)

sentence

Reputation: 8903

You could get around the problem by using sklearn.svm.SVC and setting the probability parameter to True.

As you can read in the documentation:

probability: boolean, optional (default=False)

Whether to enable probability estimates. This must be enabled prior to calling fit, will slow down that method as it internally uses 5-fold cross-validation, and predict_proba may be inconsistent with predict. Read more in the User Guide.

As an example (details omitted):

import matplotlib.pyplot as plt
from sklearn.svm import SVC
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score

# ... (load the data and split into X_train, X_test, y_train, y_test)

model = SVC(kernel="linear", probability=True)
model.fit(X_train, y_train)

# ...

# any continuous score works for roc_curve; decision_function gives the
# signed distance of each sample to the separating hyperplane
decision_scores = model.decision_function(X_test)
fpr, tpr, thres = roc_curve(y_test, decision_scores)
print('AUC: {:.3f}'.format(roc_auc_score(y_test, decision_scores)))

# roc curve
plt.plot(fpr, tpr, "b", label='Linear SVM')
plt.plot([0,1],[0,1], "k--", label='Random Guess')
plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.legend(loc="best")
plt.title("ROC curve")
plt.show()

and you should get something like this:

[Image: ROC curve plot produced by the code above]
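Note that the snippet feeds decision_function scores to roc_curve, which works even without probability=True; since probabilities were enabled here, an equivalent variant is the positive-class column of predict_proba (the two may rank a few samples differently, as the documentation quoted above warns):

# equivalent, using the probabilities enabled by probability=True
proba_scores = model.predict_proba(X_test)[:, 1]
fpr, tpr, thres = roc_curve(y_test, proba_scores)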


NOTE that LinearSVC is MUCH FASTER than SVC(kernel="linear"), especially if the training set is very large or has many features.

Upvotes: 3

CAFEBABE

Reputation: 4101

You can use the decision function here:

from sklearn.svm import LinearSVC
from sklearn.datasets import make_classification

X, y = make_classification(n_features=4, random_state=0)
clf = LinearSVC(random_state=0, tol=1e-5)
clf.fit(X, y)

print(clf.predict([[0, 0, 0, 0]]))
#>>[1]
print(clf.decision_function([[0, 0, 0, 0]]))
#>>[ 0.2841757]

The cleanest way would be to use Platt scaling to convert the distance to the hyperplane, as given by decision_function, into a probability.
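scikit-learn implements Platt scaling as CalibratedClassifierCV with method="sigmoid". A minimal sketch on the toy data above (the first constructor argument is passed positionally here, since it is named estimator in recent versions and base_estimator in older ones):

from sklearn.calibration import CalibratedClassifierCV

# Platt scaling: fit a sigmoid to the SVM scores via cross-validation
calibrated = CalibratedClassifierCV(LinearSVC(random_state=0, tol=1e-5),
                                    method="sigmoid", cv=5)
calibrated.fit(X, y)
print(calibrated.predict_proba([[0, 0, 0, 0]])[:, 1])  # P(y = 1)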

However, as a quick and dirty alternative, you can squash the scores into (0, 1) with tanh:

import math

[math.tanh(v)/2+0.5 for v in clf.decision_function([[0, 0, 0, 0],[1,1,1,1]])]
#>>[0.6383826839666699, 0.9635586809605969]

As Platt scaling preserves the ordering of the examples, and the ROC curve depends only on that ordering, the resulting ROC curve will be consistent.

In addition: Platt’s method is also known to have theoretical issues. If confidence scores are required, but these do not have to be probabilities, then it is advisable to set probability=False and use decision_function instead of predict_proba.
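So for the original question, the direct route is to hand the raw decision_function scores to roc_curve and roc_auc_score, with no probability conversion at all. A sketch on the toy data above (in practice you would score a held-out test set):

from sklearn.metrics import roc_curve, roc_auc_score

scores = clf.decision_function(X)
fpr, tpr, thres = roc_curve(y, scores)
print('AUC: {:.3f}'.format(roc_auc_score(y, scores)))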

Upvotes: 1
