Reputation: 8903
I'm using sklearn.metrics.cohen_kappa_score to evaluate my model. The function's weights parameter can be None, 'linear' or 'quadratic'.
I would like to override the function so that I can pass a custom weights matrix. How can this be done?
def cohen_kappa_score(y1, y2, *, labels=None, weights=None,
                      sample_weight=None):
    confusion = confusion_matrix(y1, y2, labels=labels,
                                 sample_weight=sample_weight)
    n_classes = confusion.shape[0]
    sum0 = np.sum(confusion, axis=0)
    sum1 = np.sum(confusion, axis=1)
    expected = np.outer(sum0, sum1) / np.sum(sum0)

    if type(w_mat) != np.ndarray:  # <------------------------- line I want to add
        if weights is None:
            w_mat = np.ones([n_classes, n_classes], dtype=int)
            w_mat.flat[:: n_classes + 1] = 0
        elif weights == "linear" or weights == "quadratic":
            w_mat = np.zeros([n_classes, n_classes], dtype=int)
            w_mat += np.arange(n_classes)
            if weights == "linear":
                w_mat = np.abs(w_mat - w_mat.T)
            else:
                w_mat = (w_mat - w_mat.T) ** 2
        else:
            raise ValueError("Unknown kappa weighting type.")

    k = np.sum(w_mat * confusion) / np.sum(w_mat * expected)
    return 1 - k
Upvotes: 1
Views: 203
Reputation: 14072
You can either go with make_scorer, as shown by @Antoine in the other answer, or you can override the function itself:
import numpy as np
import sklearn.metrics as sm
from sklearn.metrics import confusion_matrix

def custom_cohen_kappa_score(y1, y2, *, labels=None, weights=None, sample_weight=None):
    print("This is the custom function")
    confusion = confusion_matrix(y1, y2, labels=labels,
                                 sample_weight=sample_weight)
    n_classes = confusion.shape[0]
    sum0 = np.sum(confusion, axis=0)
    sum1 = np.sum(confusion, axis=1)
    expected = np.outer(sum0, sum1) / np.sum(sum0)

    if weights is None:
        w_mat = np.ones([n_classes, n_classes], dtype=int)
        w_mat.flat[:: n_classes + 1] = 0
    elif weights == "linear" or weights == "quadratic":
        w_mat = np.zeros([n_classes, n_classes], dtype=int)
        w_mat += np.arange(n_classes)
        if weights == "linear":
            w_mat = np.abs(w_mat - w_mat.T)
        else:
            w_mat = (w_mat - w_mat.T) ** 2
    else:
        raise ValueError("Unknown kappa weighting type.")

    k = np.sum(w_mat * confusion) / np.sum(w_mat * expected)
    return 1 - k
# override it
sm.cohen_kappa_score = custom_cohen_kappa_score
# Test: Here every time `cohen_kappa_score` is called,
# the custom one will be invoked instead!
from sklearn.metrics import cohen_kappa_score
y_true = [2, 0, 2, 2, 0, 1]
y_pred = [0, 0, 2, 2, 0, 2]
print(cohen_kappa_score(y_true, y_pred))
Output:
This is the custom function
0.4285714285714286
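If the goal is specifically to accept a custom weight matrix, here is a minimal sketch of how the overridden function could be extended. It assumes the matrix is simply passed as a NumPy array through the same weights argument; the function name cohen_kappa_score_custom_weights and the example matrix W are illustrative, not part of sklearn:
import numpy as np
from sklearn.metrics import confusion_matrix

# Sketch only: same computation as sklearn's cohen_kappa_score, but `weights`
# may also be an (n_classes, n_classes) NumPy array used directly as w_mat.
def cohen_kappa_score_custom_weights(y1, y2, *, labels=None, weights=None,
                                     sample_weight=None):
    confusion = confusion_matrix(y1, y2, labels=labels,
                                 sample_weight=sample_weight)
    n_classes = confusion.shape[0]
    sum0 = np.sum(confusion, axis=0)
    sum1 = np.sum(confusion, axis=1)
    expected = np.outer(sum0, sum1) / np.sum(sum0)

    if isinstance(weights, np.ndarray):
        # caller supplied a custom weight matrix: use it as-is
        w_mat = weights
    elif weights is None:
        w_mat = np.ones([n_classes, n_classes], dtype=int)
        w_mat.flat[:: n_classes + 1] = 0
    elif weights in ("linear", "quadratic"):
        w_mat = np.zeros([n_classes, n_classes], dtype=int)
        w_mat += np.arange(n_classes)
        if weights == "linear":
            w_mat = np.abs(w_mat - w_mat.T)
        else:
            w_mat = (w_mat - w_mat.T) ** 2
    else:
        raise ValueError("Unknown kappa weighting type.")

    k = np.sum(w_mat * confusion) / np.sum(w_mat * expected)
    return 1 - k

# Example with a hypothetical 3x3 weight matrix
y_true = [2, 0, 2, 2, 0, 1]
y_pred = [0, 0, 2, 2, 0, 2]
W = np.array([[0, 1, 4],
              [1, 0, 1],
              [4, 1, 0]])
print(cohen_kappa_score_custom_weights(y_true, y_pred, weights=W))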
Upvotes: 1
Reputation: 5304
The best option would be to wrap your own scoring function with sklearn.metrics.make_scorer, so that you can use it with GridSearchCV and cross_val_score, as follows:
from sklearn.metrics import make_scorer

weighted_cohen_kappa_score = make_scorer(custom_cohen_kappa,
                                         greater_is_better=True,
                                         needs_proba=False,
                                         needs_threshold=False)
Where custom_cohen_kappa is your custom scoring function defined in your question.
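As a usage sketch (not part of the original answer): make_scorer forwards any extra keyword arguments to the metric, so a fixed custom weight matrix can be bound into the scorer and then used with cross_val_score. The names cohen_kappa_score_custom_weights (the sketch from the other answer) and W are assumptions for illustration:
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import make_scorer
from sklearn.model_selection import cross_val_score

# Hypothetical custom weight matrix for a 3-class problem
W = np.array([[0, 1, 4],
              [1, 0, 1],
              [4, 1, 0]])

# Bind the weight matrix into the scorer; make_scorer passes `weights=W`
# on to the metric function at scoring time.
weighted_kappa_scorer = make_scorer(cohen_kappa_score_custom_weights,
                                    greater_is_better=True,
                                    weights=W)

X, y = load_iris(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         scoring=weighted_kappa_scorer, cv=5)
print(scores)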
Upvotes: 0