Reputation: 880
When I plot the feature importance, I get this messy plot. I have more than 7000 variables. I understand the built-in function only selects the most important ones, but the final graph is unreadable. This is the complete code:
import numpy as np
import pandas as pd
df = pd.read_csv('ricerice.csv')
array=df.values
X = array[:,0:7803]
Y = array[:,7804]
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
seed=0
test_size=0.30
X_train, X_test, y_train, y_test = train_test_split(X,Y,test_size=test_size, random_state=seed)
from xgboost import XGBClassifier
model = XGBClassifier()
model.fit(X, Y)
import matplotlib.pyplot as plt
from matplotlib import pyplot
from xgboost import plot_importance
fig1=plt.gcf()
plot_importance(model)
plt.draw()
fig1.savefig('xgboost.png', figsize=(50, 40), dpi=1000)
Despite the size of the figure, the graph is illegible.
Upvotes: 22
Views: 35172
Reputation: 1096
You need to sort your feature importances in descending order first:
sorted_idx = trained_mdl.feature_importances_.argsort()[::-1]
Then just plot them with the column names from your dataframe:
from matplotlib import pyplot as plt
n_top_features = 10
sorted_idx = trained_mdl.feature_importances_.argsort()[::-1]
plt.barh(X_test.columns[sorted_idx][:n_top_features], trained_mdl.feature_importances_[sorted_idx][:n_top_features])
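Note that X_test.columns only exists if X_test is a pandas DataFrame; with the NumPy-array slicing from the question it would not be. A minimal sketch, assuming the df and column layout from the question, that keeps the column names through the split:
from sklearn.model_selection import train_test_split
X_df = df.iloc[:, 0:7803]   # feature columns as a DataFrame, so the names survive the split
y = df.iloc[:, 7804]        # target column, matching the indexing in the question
X_train, X_test, y_train, y_test = train_test_split(X_df, y, test_size=0.30, random_state=0)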
Upvotes: 4
Reputation: 5859
You can obtain feature importance from the Xgboost model with the feature_importances_ attribute. In your case, it will be:
model.feature_importances_
This attribute is the array with the gain importance for each feature. Then you can plot it:
from matplotlib import pyplot as plt
plt.barh(feature_names, model.feature_importances_)
(feature_names is a list of feature names.)
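If feature_names is a plain Python list, the fancy indexing used in the next snippet (feature_names[sorted_idx]) will fail, so it helps to make it a NumPy array. One way to build it, assuming the DataFrame from the question, where the first 7803 columns are the features:
import numpy as np
# take the names of the feature columns used in the question
feature_names = np.array(df.columns[0:7803])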
You can sort the array and select the number of top features you want (for example, the top 10):
sorted_idx = model.feature_importances_.argsort()
# argsort() sorts ascending, so the last 10 entries are the 10 most important features
plt.barh(feature_names[sorted_idx][-10:], model.feature_importances_[sorted_idx][-10:])
plt.xlabel("Xgboost Feature Importance")
There are two more methods to get feature importance:
- permutation_importance from scikit-learn (from version 0.22)
- importance computed with SHAP values
You can read more in this blog post of mine.
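A minimal sketch of the permutation-based approach, assuming the fitted model, the X_test/y_test split, and the feature_names array from above:
from sklearn.inspection import permutation_importance
from matplotlib import pyplot as plt
# permutation importance: how much the score drops when each feature is shuffled
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
sorted_idx = result.importances_mean.argsort()
plt.barh(feature_names[sorted_idx][-10:], result.importances_mean[sorted_idx][-10:])
plt.xlabel("Permutation Importance")
plt.show()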
Upvotes: 4
Reputation: 23129
There are a couple of points:
- To fit the model, you want to use the training dataset (X_train, y_train), not the entire dataset (X, y).
- Use the max_num_features parameter of the plot_importance() function to display only the top max_num_features features (e.g. the top 10).
With the above modifications to your code, and with some randomly generated data for demonstration, the code and output are as below:
import numpy as np
# generate some random data for demonstration purpose, use your original dataset here
X = np.random.rand(1000,100) # 1000 x 100 data
y = np.random.rand(1000).round() # 0, 1 labels
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
seed=0
test_size=0.30
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=test_size, random_state=seed)
from xgboost import XGBClassifier
model = XGBClassifier()
model.fit(X_train, y_train)
import matplotlib.pylab as plt
from matplotlib import pyplot
from xgboost import plot_importance
plot_importance(model, max_num_features=10) # top 10 most important features
plt.show()
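As a side note, accuracy_score is imported above but never used; the held-out split can be used to sanity-check the fitted model, for example:
y_pred = model.predict(X_test)
print("Test accuracy: %.3f" % accuracy_score(y_test, y_pred))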
Upvotes: 26