Simon Kiely

Reputation: 6050

Using ranking data in Logistic Regression

I am trying to use some ranking data in a logistic regression. I want to use machine learning to make a simple classifier as to whether a webpage is "good" or not. It's just a learning exercise so I don't expect great results; just hoping to learn the "process" and coding techniques.

I have put my data in a .csv as follows:

URL WebsiteText AlexaRank GooglePageRank

In my test CSV we have:

URL WebsiteText AlexaRank GooglePageRank Label

Label is a binary classification indicating "good" with 1 or "bad" with 0.

I currently have my LR running using only the website text, which I run a TF-IDF on.
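
For reference, my current setup looks roughly like this (a minimal sketch; file and column names match my CSVs above, and I'm assuming labels are available for the rows I train on):

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train = pd.read_csv("train.csv")
vect = TfidfVectorizer()
X = vect.fit_transform(train["WebsiteText"])      # TF-IDF on the text only
lr = LogisticRegression().fit(X, train["Label"])  # assumes a Label column here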

I have two questions which I need help with: how should I normalize the rank data, given that AlexaRank values can be enormous, and how can I combine these numeric columns with my TF-IDF features?

Upvotes: 5

Views: 3474

Answers (2)

Daniel Mahler

Reputation: 8203

Regarding normalizing the numeric ranks, either scikit-learn's StandardScaler or a logarithmic transform (or both) should work well enough.
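
For instance, a minimal sketch combining both (log1p first to compress the enormous range, then standardization; the toy values are made up):

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler

# Made-up (AlexaRank, GooglePageRank) rows spanning a huge range.
ranks = np.array([[1.0, 9.0], [5000.0, 4.0], [83904803289480.0, 0.0]])

# log1p tames the dynamic range; StandardScaler then centers and
# rescales each column to mean 0, std 1.
log_then_scale = make_pipeline(FunctionTransformer(np.log1p), StandardScaler())
print(log_then_scale.fit_transform(ranks))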

For building up a working pipeline, I find my sanity greatly benefits from using the Pandas package and the sklearn.pipeline utilities. Here is a simple script that should do what you need.

First, a couple of utility classes I always seem to need. It would be nice to have something like these in sklearn.pipeline or sklearn.utilities.

from sklearn import base

class Columns(base.TransformerMixin, base.BaseEstimator):
    """Select a fixed subset of columns from a DataFrame."""
    def __init__(self, columns):
        super(Columns, self).__init__()
        self.columns_ = columns
    def fit(self, *args, **kwargs):
        # Stateless: nothing to learn from the data.
        return self
    def transform(self, X, *args, **kwargs):
        return X[self.columns_]

class Text(base.TransformerMixin, base.BaseEstimator):
    """Collapse each row's cells into one tab-separated string."""
    def fit(self, *args, **kwargs):
        return self
    def transform(self, X, *args, **kwargs):
        return X.apply("\t".join, axis=1, raw=False)
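
A quick sanity check of what these two do on a toy frame (column names taken from the question):

import pandas as pd

toy = pd.DataFrame({
    "WebsiteText": ["some page text", "more page text"],
    "AlexaRank": [1200, 83904803289480],
    "GooglePageRank": [7, 0],
})

# Columns picks out a subset; Text collapses each row to one string.
print(Text().transform(Columns(["WebsiteText"]).transform(toy)))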

Now set up the pipeline. I used the SGDClassifier implementation of logistic regression, since it tends to be more efficient for high-dimensional data like text. I also find that hinge loss often gives better results than logistic loss anyway.

from sklearn import linear_model as lin
from sklearn import metrics
from sklearn.feature_extraction import text as txt
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn import preprocessing as prep
import numpy as np
import pandas as pd

pipe = Pipeline([
    ('feat', FeatureUnion([
        # Text branch: pick the text column, flatten it to strings, vectorize.
        ('txt', Pipeline([
            ('txtcols', Columns(["WebsiteText"])),
            ('totxt', Text()),
            ('vect', txt.TfidfVectorizer()),
            ])),
        # Numeric branch: pick the rank columns and standardize them.
        ('num', Pipeline([
            ('numcols', Columns(["AlexaRank", "GooglePageRank"])),
            ('scale', prep.StandardScaler()),
            ])),
        ])),
    # loss="log" is logistic regression (spelled "log_loss" in newer sklearn).
    ('clf', lin.SGDClassifier(loss="log")),
    ])
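
If you want to try the hinge loss I mentioned, it is a one-line swap; the pipeline then still has decision_function for the ranking metrics below, but no predict_proba:

# Hinge loss makes this a linear SVM rather than logistic regression.
pipe.set_params(clf=lin.SGDClassifier(loss="hinge"))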

Next train the model:

train = pd.read_csv("train.csv")
pipe.fit(train, train.Label)
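
Since the whole thing is a single estimator, you can also cross-validate it directly on the training frame before touching the test set (a sketch using the modern sklearn.model_selection module):

from sklearn.model_selection import cross_val_score

# 5-fold F1; column selection and vectorization happen inside each fold.
scores = cross_val_score(pipe, train, train.Label, cv=5, scoring="f1")
print(scores.mean(), scores.std())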

Finally evaluate on test data:

test = pd.read_csv("test.csv")
tstlbl = np.array(test.Label)

print(pipe.score(test, tstlbl))

pred = pipe.predict(test)
print(metrics.confusion_matrix(tstlbl, pred))
print(metrics.classification_report(tstlbl, pred))
print(metrics.f1_score(tstlbl, pred))

prob = pipe.decision_function(test)
print(metrics.roc_auc_score(tstlbl, prob))
print(metrics.average_precision_score(tstlbl, prob))

You will probably not get very good results with everything using default settings like this, but it should give you a baseline to build from. I can suggest some parameter settings that usually work for me if you like.
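
For example, a small grid search over the pipeline is easy to set up; the double-underscore paths follow the step names used above, and the grid values are just illustrative guesses:

from sklearn.model_selection import GridSearchCV

search = GridSearchCV(pipe, param_grid={
    "feat__txt__vect__min_df": [1, 3, 5],  # prune very rare terms
    "clf__alpha": [1e-5, 1e-4, 1e-3],      # regularization strength
}, scoring="f1", cv=5)
search.fit(train, train.Label)
print(search.best_params_, search.best_score_)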

Upvotes: 2

Sudeep Juvekar

Reputation: 5098

I guess sklearn.preprocessing.StandardScaler would be the first thing you want to try. StandardScaler transforms each of your features into a zero-mean, unit-variance feature.

  • This definitely gets rid of your first problem. AlexaRank will be guaranteed to be spread around 0 on a small scale (yes, even massive AlexaRank values like 83904803289480 are transformed to small floating point numbers). Of course, the results will not be integers between 1 and 10000, but they will maintain the same order as the original ranks. And in this case, keeping the rank normalized and on a small scale will help solve your second problem, as follows.
  • In order to understand why normalization would help in LR, let's revisit the logit formulation of LR:

        log(p / (1 - p)) = β0 + β1·X1 + β2·X2 + β3·X3 + β4·X4 + β5·X5
    In your case, X1, X2, X3 are three TF-IDF features and X4, X5 are the Alexa/Google rank features. Now, the linear form of the equation says each coefficient represents the change in the logit of y per unit change in its variable. Think about what happens when X4 takes a massive rank value, say 83904803289480: the Alexa Rank variable dominates your LR fit, and a small change in a TF-IDF value has almost no effect on it. One might think the coefficients should simply adjust to small/large values to account for the differences between these features. Not in this case: it's not only the magnitude of the variables that matters but also their range. Alexa Rank has an enormous range and will dominate your LR fit, so I guess normalizing all variables using StandardScaler to bring their ranges in line will improve the fit; the small simulation below illustrates this.
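
A tiny simulation makes this concrete (all numbers made up): the label depends only on the small-scale feature, yet without scaling the huge-range column cripples the fit.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
tfidf = rng.rand(500, 1)                    # small-scale feature in [0, 1]
rank = rng.uniform(1, 1e13, size=(500, 1))  # huge-range "rank" feature
X = np.hstack([tfidf, rank])
y = (tfidf.ravel() > 0.5).astype(int)       # label depends only on tfidf

raw = LogisticRegression(max_iter=1000).fit(X, y)
std = LogisticRegression(max_iter=1000).fit(StandardScaler().fit_transform(X), y)
print(raw.coef_)  # the informative feature's weight stays tiny
print(std.coef_)  # after scaling its weight dominates, as it should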

Here is how you can scale the X matrix.

sc = preprocessing.StandardScaler().fit(X)
X = sc.transform(X)

Don't forget to use the same scaler to transform X_test.

X_test = sc.transform(X_test)

Now you can run the fitting procedure as before, with rd being your logistic regression classifier:

rd.fit(X, y)
rd.predict_proba(X_test)

Check this out for more on sklearn preprocessing: http://scikit-learn.org/stable/modules/preprocessing.html

Edit: The parsing and column-merging part can be done easily using pandas, i.e., there is no need to convert the matrices into lists and then append them. Moreover, pandas DataFrames can be indexed directly by their column names.

import pandas as p

AlexaAndGoogleTrainData = p.read_table('train.tsv', header=0)[["AlexaRank", "GooglePageRank"]]
AlexaAndGoogleTestData = p.read_table('test.tsv', header=0)[["AlexaRank", "GooglePageRank"]]
# (newer pandas: use p.concat([...]) instead of .append)
AllAlexaAndGoogleInfo = AlexaAndGoogleTestData.append(AlexaAndGoogleTrainData)

Note that we pass the header=0 argument to read_table to keep the original header names from the tsv file. Also note how we can index using an entire set of columns. Finally, you can stack this new matrix with X using numpy.hstack.

X = np.hstack((X, AllAlexaAndGoogleInfo))

hstack horizontally combines two multi-dimensional array-like structures, provided they have the same number of rows.
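
One caveat: if X here is the sparse matrix that TfidfVectorizer returns, numpy.hstack will not merge it with a dense frame element-wise; scipy's sparse hstack is the safe route (a sketch, assuming X is that sparse matrix):

from scipy import sparse

# Keep everything sparse when X is a TF-IDF matrix.
X = sparse.hstack((X, sparse.csr_matrix(AllAlexaAndGoogleInfo.values))).tocsr()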

Upvotes: 5
