Raed Shabbir

Reputation: 66

scikit-learn: random forest classifier giving ValueError

My random forest classifier was running just fine until I added some new features. Now I keep getting the following error whenever I try to run it:

   \Anaconda2\lib\site-packages\sklearn\utils\validation.pyc in _assert_all_finite(X)
         56             and not np.isfinite(X).all()):
         57         raise ValueError("Input contains NaN, infinity"
    ---> 58                          " or a value too large for %r." % X.dtype)
         59 
         60 

    ValueError: Input contains NaN, infinity or a value too large for dtype('float32').

train and test are both pd.DataFrame objects read from CSV files. I'm trying to add some more features for a better predictor, but I end up getting the above error whenever I try fitting. I did try removing NaN and infinite values but still get the same error.

Below is my code:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss
def features(df):
    df["num_photos"] = df["photos"].apply(len)
    df["num_features"] = df["features"].apply(len)
    df["year_created"] = df["created"].dt.year
    df["month_created"] = df["created"].dt.month
    df["day_created"] = df["created"].dt.day
    df["desc_len"] = df["description"].apply(lambda x: len(x.split(" ")))
    #New features begin here 
    df["pricePerBed"] = df['price'] / df['bedrooms'] 
    df["pricePerBath"] = df['price'] / df['bathrooms']
    df["pricePerRoom"] = df['price'] / (df['bedrooms'] + df['bathrooms'])
    df["bedPerBath"] = df['bedrooms'] / df['bathrooms']
    df["bedBathDiff"] = df['bedrooms'] - df['bathrooms']
    df["bedBathSum"] = df["bedrooms"] + df['bathrooms']
    df["bedsPerc"] = df["bedrooms"] / (df['bedrooms'] + df['bathrooms'])

    df = df.replace([np.inf, -np.inf], np.nan)
    df = df.fillna(1)

    return df

features(train)
features(test)

key_features = ["bathrooms", "bedrooms", "latitude", "longitude", "year_created", 
                "month_created", "day_created", "price", "num_photos", "num_features", "desc_len",
                "pricePerBed", 
                "pricePerBath", 
                "pricePerRoom", 
                #"bedPerBath", 
                "bedBathDiff", 
                "bedBathSum"]

X = train[key_features]
y = train["interest_level"]

X.fillna(1) #I tried getting rid of NaN

X.isnull().any()

The bedPerBath variable returned True for isnull().any(), so I left it out; the rest all returned False. However, when I try to fit the estimator I still get the ValueError.
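One thing I noticed while debugging (a small toy example, not my actual data): isnull() only flags NaN, not infinite values, so a column can pass the isnull() check above and still trip sklearn's finiteness check. np.isfinite catches both, and replace()/fillna() return new frames that have to be assigned back:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"ratio": [1.0, np.inf, np.nan]})

print(df["ratio"].isnull().any())      # flags only the NaN, not the inf
print(np.isfinite(df["ratio"]).all())  # False: the inf is caught here

# replace/fillna are not in-place; the result must be assigned back
df = df.replace([np.inf, -np.inf], np.nan).fillna(1)
print(np.isfinite(df["ratio"]).all())
```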

X_train, X_cv, y_train, y_cv = train_test_split(X, y, test_size = 0.3)

X_train.isnull().any()

clfRF = RandomForestClassifier(n_estimators = 1000)
clfRF.fit(X_train, y_train)

#CV
y_cv_pred = clfRF.predict_proba(X_cv)
log_loss(y_cv, y_cv_pred)

I noticed the error message says "too large for dtype('float32')", while my values are mostly float64. Could this be causing the error? If so, why?

Thank you.

Upvotes: 2

Views: 802

Answers (1)

Abhishek Thakur

Reputation: 17035

Try replacing the non-finite values with np.nan_to_num before splitting:

import numpy as np
X_train, X_cv, y_train, y_cv = train_test_split(np.nan_to_num(X), y, test_size = 0.3)

clfRF = RandomForestClassifier(n_estimators = 1000)
clfRF.fit(X_train, y_train)

#CV
y_cv_pred = clfRF.predict_proba(X_cv)
log_loss(y_cv, y_cv_pred)
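For context on why this works: np.nan_to_num replaces NaN with 0 and ±inf with the largest/smallest finite values of the dtype, so everything sklearn sees is finite. (A likely reason your own cleaning didn't take effect, judging from the code shown: replace()/fillna() inside features() rebind a local variable, and features(train) discards the returned frame, so the inf values survive.) A quick sketch of nan_to_num's behavior:

```python
import numpy as np

a = np.array([np.nan, np.inf, -np.inf, 2.5])
b = np.nan_to_num(a)

print(b)                      # NaN -> 0, +/-inf -> huge finite floats
print(np.isfinite(b).all())   # everything is now finite
```

Note that the "huge finite floats" standing in for inf can still distort a model; replacing inf with NaN and imputing a sensible value (as the question attempts) is usually preferable when the result is actually assigned back.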

Upvotes: 2
