Abdul Rehman

Reputation: 5644

Sklearn Python Log Loss for Logistic Regression evaluation raised an error

I have trained a model using Logistic Regression and need to evaluate its accuracy with Log Loss. Here are some details about the data:

Features/ X

   Principal  terms  age  Gender  weekend  Bechalor  High School or Below  college
0       1000     30   45       0        0         0                     1        0
1       1000     30   33       1        0         1                     0        0
2       1000     15   27       0        0         0                     0        1

Labels/ Y

array(['PAIDOFF', 'PAIDOFF', 'PAIDOFF', 'PAIDOFF', 'COLLECTION'], dtype=object)

Logistic Regression Model:

import pandas as pd
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

logreg = LogisticRegression(C=1e5, solver='lbfgs', multi_class='multinomial')

Feature = df[['Principal','terms','age','Gender','weekend']]
Feature = pd.concat([Feature, pd.get_dummies(df['education'])], axis=1)
Feature.drop(['Master or Above'], axis=1, inplace=True)

X = Feature
X = preprocessing.StandardScaler().fit(X).transform(X)
y = df['loan_status'].values

X_train, X_test, y_train, lg_y_test = train_test_split(X, y, test_size=0.3, random_state=4)

# fit the logistic regression model
logreg.fit(X_train, y_train)

lg_loan_status = logreg.predict(X_test)
lg_loan_status

Now I need to calculate the Jaccard score, F1-score and Log Loss for that model.

Here's my separate testing dataset:

test_df['due_date'] = pd.to_datetime(test_df['due_date'])
test_df['effective_date'] = pd.to_datetime(test_df['effective_date'])
test_df['dayofweek'] = test_df['effective_date'].dt.dayofweek
test_df['weekend'] = test_df['dayofweek'].apply(lambda x: 1 if (x>3)  else 0)
test_df.groupby(['Gender'])['loan_status'].value_counts(normalize=True)
# test_df['Gender'].replace(to_replace=['male','female'], value=[0,1],inplace=True)
Feature = test_df[['Principal','terms','age','Gender','weekend']]
Feature = pd.concat([Feature, pd.get_dummies(test_df['education'])], axis=1)
Feature.drop(['Master or Above'], axis = 1,inplace=True)
Feature.head()

X = Feature
Y = test_df['loan_status'].values

Feature.head()
   Principal  terms   age  Gender  weekend  Bechalor  High School or Below  college
0     1000.0   30.0  50.0  female      0.0         0                     1        0
1      300.0    7.0  35.0    male      1.0         1                     0        0
2     1000.0   30.0  43.0  female      1.0         0                     0        1

Here's what I have tried:

# Evaluation for Logistic Regression
X_train, X_test, y_train, lg_y_test = train_test_split(X, y, test_size=0.3, random_state=3)

lg_jaccard = jaccard_similarity_score(lg_y_test, lg_loan_status, normalize=False)
lg_f1_score = f1_score(lg_y_test, lg_loan_status, average='micro')


lg_log_loss = log_loss(lg_y_test, lg_loan_status)

print('Jaccard is : {}'.format(lg_jaccard))
print('F1-score is : {}'.format(lg_f1_score))
print('Log Loss is : {}'.format(lg_log_loss))

But it returns this error:

ValueError: could not convert string to float: 'COLLECTION'

Update: Here's the lg_y_test:

['PAIDOFF' 'PAIDOFF' 'COLLECTION' 'COLLECTION' 'PAIDOFF' 'COLLECTION'
'PAIDOFF' 'COLLECTION' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF'
 'COLLECTION' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'COLLECTION'
 'PAIDOFF' 'COLLECTION' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF'
 'COLLECTION' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'COLLECTION' 'PAIDOFF'
 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'COLLECTION' 'PAIDOFF' 'COLLECTION'
 'COLLECTION' 'COLLECTION' 'PAIDOFF' 'PAIDOFF' 'COLLECTION' 'PAIDOFF'
 'PAIDOFF' 'COLLECTION' 'PAIDOFF' 'PAIDOFF' 'COLLECTION' 'COLLECTION'
 'PAIDOFF' 'COLLECTION' 'PAIDOFF' 'PAIDOFF' 'COLLECTION' 'COLLECTION'
 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF'
 'COLLECTION' 'COLLECTION' 'PAIDOFF' 'COLLECTION' 'PAIDOFF' 'PAIDOFF'
 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF'
 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF'
 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF'
 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'COLLECTION'
 'COLLECTION' 'PAIDOFF' 'PAIDOFF' 'PAIDOFF' 'COLLECTION' 'PAIDOFF'
 'PAIDOFF' 'PAIDOFF' 'COLLECTION']

Upvotes: 4

Views: 7566

Answers (2)

Charlie Parker

Reputation: 5201

To compute the log loss, i.e. the cross-entropy loss, for logistic regression, do this (self-contained example):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn import metrics

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(random_state=0).fit(X, y)

clf.predict(X[:2, :])        # hard class predictions
clf.predict_proba(X[:2, :])  # class probabilities, one column per class
clf.score(X, y)              # mean accuracy

y_probs = clf.predict_proba(X)             # probabilities for every sample
qry_loss_t = metrics.log_loss(y, y_probs)  # cross entropy / log loss
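
The same pattern carries over to the question's data: log_loss also accepts string class labels in y_true, as long as it is given the probabilities. A minimal sketch, assuming logreg, X_test and lg_y_test from the question are already defined:

from sklearn import metrics

# Sketch only: logreg, X_test and lg_y_test are assumed to exist as in the question.
y_probs = logreg.predict_proba(X_test)  # one probability column per class
test_log_loss = metrics.log_loss(lg_y_test, y_probs, labels=logreg.classes_)
print('Log Loss is : {}'.format(test_log_loss))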


Upvotes: 1

Gabriel M

Reputation: 1514

The problem is the following:

To compute log_loss you need the probabilities of your predictions. If you provide only the predicted classes (the classes with the maximum probability), this metric cannot be computed.

Sklearn provides a predict_proba method whenever it is possible. You should use it as follows:

lg_loan_status_probas = logreg.predict_proba(X_test)
lg_log_loss = log_loss(lg_y_test, lg_loan_status_probas)
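
For the full set of metrics the question asks for (Jaccard, F1-score and Log Loss), a minimal sketch could look like the one below. It assumes logreg, X_test and lg_y_test from the question are in scope; jaccard_score is the replacement for the deprecated jaccard_similarity_score in newer scikit-learn releases, and pos_label='PAIDOFF' is only an assumption about which class to treat as positive:

from sklearn.metrics import jaccard_score, f1_score, log_loss

# Sketch only: logreg, X_test and lg_y_test are assumed to come from the question.
lg_loan_status = logreg.predict(X_test)               # hard class labels for Jaccard / F1
lg_loan_status_probas = logreg.predict_proba(X_test)  # probabilities for Log Loss

lg_jaccard = jaccard_score(lg_y_test, lg_loan_status, pos_label='PAIDOFF')
lg_f1_score = f1_score(lg_y_test, lg_loan_status, average='micro')
lg_log_loss = log_loss(lg_y_test, lg_loan_status_probas, labels=logreg.classes_)

print('Jaccard is : {}'.format(lg_jaccard))
print('F1-score is : {}'.format(lg_f1_score))
print('Log Loss is : {}'.format(lg_log_loss))

Note that Jaccard and F1 work on the hard predictions, while log_loss only makes sense with the probabilities from predict_proba.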

Upvotes: 4
