Reputation: 5407
I have written very basic sklearn code using logistic regression to predict a value.
The training data looks like this:
https://gist.github.com/anonymous/563591e0395e8d988277d3ce63d7438f
date hr_of_day vals
01/05/2014 9 929
01/05/2014 10 942
01/05/2014 11 968
01/05/2014 12 856
01/05/2014 13 835
01/05/2014 14 885
01/05/2014 15 945
01/05/2014 16 924
01/05/2014 17 914
01/05/2014 18 744
01/05/2014 19 377
01/05/2014 20 219
01/05/2014 21 106
I have selected the first 8 items from the training data just to validate the classifier. I want to predict the value of vals, which, in the testing data, I have set to 0. Is that correct?
date hr_of_day vals
2014-05-01 0 0
2014-05-01 1 0
2014-05-01 2 0
2014-05-01 3 0
2014-05-01 4 0
2014-05-01 5 0
2014-05-01 6 0
2014-05-01 7 0
My model code works fine, but the result looks strange. I was expecting the value of vals in the result; instead, I am getting a large matrix with all element values around 0.00030676.
I would appreciate it if someone could explain this result or help me work with it better.
import pandas as pd
import numpy as np
from sklearn.linear_model import LogisticRegression

# Load the training and test data; parse the date column as datetimes
Train = pd.read_csv("data_scientist_assignment.tsv", sep='\t', parse_dates=['date'])
Train['timestamp'] = Train.date.values.astype(np.int64)

x1 = ["timestamp", "hr_of_day"]

test = pd.read_csv("test.tsv", sep='\t', parse_dates=['date'])
test['timestamp'] = test.date.values.astype(np.int64)

print(Train.columns)
print(test.columns)

# Fit a logistic regression classifier on timestamp and hour of day
model = LogisticRegression()
model.fit(Train[x1], Train["vals"])

print(model)
print(model.score(Train[x1], Train["vals"]))
print(model.predict_proba(test[x1]))
The results look like this:
In [92]: print(model)
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
penalty='l2', random_state=None, solver='liblinear', tol=0.0001,
verbose=0, warm_start=False)
In [93]: print(model.score(Train[x1], Train["vals"]))
0.00520833333333
In [94]: print(model.predict_proba(test[x1]))
[[ 0.00030676 0.00030676 0.00030676 ..., 0.00030889 0.00030885
0.00030902]
[ 0.00030676 0.00030676 0.00030676 ..., 0.00030889 0.00030885
0.00030902]
[ 0.00030676 0.00030676 0.00030676 ..., 0.00030889 0.00030885
0.00030902]
...,
[ 0.00030676 0.00030676 0.00030676 ..., 0.00030889 0.00030885
0.00030902]
[ 0.00030676 0.00030676 0.00030676 ..., 0.00030889 0.00030885
0.00030902]
[ 0.00030676 0.00030676 0.00030676 ..., 0.00030889 0.00030885
0.00030902]]
Upvotes: 0
Views: 2749
Reputation: 12599
Use the following code to get the predicted labels:
predicted_labels = model.predict(test[x1])
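predict_proba returns one column per class: LogisticRegression treats every distinct training value of vals as a separate class, so a training set with many unique vals produces a wide matrix of small, near-uniform probabilities, which is what the output in the question shows. Here is a minimal sketch of how one might inspect those classes and recover the most likely vals per test row (assuming the same model, test, and x1 objects from the question):
import numpy as np

# Columns of predict_proba line up with model.classes_,
# i.e. the distinct training values of "vals"
probs = model.predict_proba(test[x1])
print(model.classes_)   # all vals values seen during training
print(probs.shape)      # (number of test rows, number of classes)

# The predicted label is the class with the highest probability;
# this is exactly what model.predict(test[x1]) returns
most_likely_vals = model.classes_[np.argmax(probs, axis=1)]
print(most_likely_vals)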
Also, try the following example to understand logistic regression in sklearn:
# Logistic Regression
from sklearn import datasets
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
# load the iris datasets
dataset = datasets.load_iris()
# fit a logistic regression model to the data
model = LogisticRegression()
model.fit(dataset.data, dataset.target)
print(model)
# make predictions
expected = dataset.target
predicted = model.predict(dataset.data)
# summarize the fit of the model
print(metrics.classification_report(expected, predicted))
print(metrics.confusion_matrix(expected, predicted))
Example source: http://machinelearningmastery.com/get-your-hands-dirty-with-scikit-learn-now/
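Applied to the data in the question, the same pattern would look roughly like this (a sketch, assuming the Train, test, x1, and model objects defined in the question's code):
from sklearn import metrics

# Predicted vals for each row of the test file
predicted_vals = model.predict(test[x1])
print(predicted_vals)

# On the training data, compare predictions with the true vals
# (this accuracy is the same quantity model.score reports)
print(metrics.accuracy_score(Train["vals"], model.predict(Train[x1])))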
Upvotes: 1