Reputation: 37
I'm trying to apply the RandomForest method to a dataset and I get this error:
ValueError: Input contains NaN, infinity or a value too large for dtype('float32')
Could someone tell me what I need to modify in the function so that the code works:
def ranks_RF(x_train, y_train, features_train, RESULT_PATH='Results'):
    """Get ranks from Random Forest"""
    print("\nMétodo_Random_Forest")
    random_forest = RandomForestRegressor(n_estimators=10)
    np.nan_to_num(x_train)
    np.nan_to_num(y_train)
    random_forest.fit(x_train, y_train)
    # Get rank by doing two times a sort.
    imp_array = np.array(random_forest.feature_importances_)
    imp_order = imp_array.argsort()
    ranks = imp_order.argsort()
    # Plot Random Forest
    imp = pd.Series(random_forest.feature_importances_, index=x_train.columns)
    imp = imp.sort_values()
    imp.plot(kind="barh")
    plt.xlabel("Importance")
    plt.ylabel("Features")
    plt.title("Feature importance using Random Forest")
    # plt.show()
    plt.savefig(RESULT_PATH + '/ranks_RF.png', bbox_inches='tight')
    return ranks
Upvotes: 1
Views: 3652
Reputation: 97
This worked for me:
np.where(x.values >= np.finfo(np.float32).max)
where x is my pandas DataFrame. Then convert your DataFrame to float32 if it isn't already.
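As a rough sketch of that check (the DataFrame x and its values below are made up for illustration, not taken from the question):

import numpy as np
import pandas as pd

# hypothetical DataFrame standing in for the training data
x = pd.DataFrame({"a": [1.0, np.inf, 3.0], "b": [4.0, 5.0, 1e40]})

# locate cells float32 cannot represent (inf compares as >= max, NaN compares as False)
rows, cols = np.where(x.values >= np.finfo(np.float32).max)
print(list(zip(rows, cols)))  # e.g. [(1, 0), (2, 1)]

# turn the oversized values (and +/-inf) into NaN, then cast down to float32;
# the remaining NaNs still have to be imputed or dropped before fitting
x = x.mask(x.abs() >= np.finfo(np.float32).max).astype(np.float32)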
Upvotes: 0
Reputation: 47008
You did not overwrite the values when you replaced the NaNs, hence the error.
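By default np.nan_to_num works on a copy (copy=True) and returns the cleaned array, leaving the input untouched, so the result has to be assigned back. A minimal sketch of what happens:

import numpy as np

arr = np.array([1.0, np.nan, 3.0])
np.nan_to_num(arr)            # cleaned copy is returned but thrown away here
print(np.isnan(arr).any())    # True -- the original still contains NaN

arr = np.nan_to_num(arr)      # reassigning keeps the cleaned values
print(np.isnan(arr).any())    # False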
Let's try an example dataset:
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import load_iris
iris = load_iris()
df = pd.DataFrame(data=iris['data'],
                  columns=iris['feature_names'])
df['target'] = iris['target']
# insert some NAs
df = df.mask(np.random.random(df.shape) < .1)
Here is a function like yours; I removed the plotting part, because that's another question altogether:
def ranks_RF(x_train, y_train):
    var_names = x_train.columns
    random_forest = RandomForestRegressor(n_estimators=10)
    # here you have to reassign back the values
    x_train = np.nan_to_num(x_train)
    y_train = np.nan_to_num(y_train)
    random_forest.fit(x_train, y_train)
    res = pd.DataFrame({
        "features": var_names,
        "importance": random_forest.feature_importances_,
    })
    res = res.sort_values(['importance'], ascending=False)
    res['rank'] = np.arange(len(res)) + 1
    return res
We run it:
ranks_RF(df.iloc[:,0:4],df['target'])
            features  importance  rank
3   petal width (cm)    0.601734     1
2  petal length (cm)    0.191613     2
0  sepal length (cm)    0.132212     3
1   sepal width (cm)    0.074442     4
Upvotes: 2