Reputation: 21
I am working on a binary classification problem with 50 features, using tf.estimator.DNNClassifier. How can I rank the features by how much they influence the outcome?
model = tf.estimator.DNNClassifier(feature_columns=feat_cols, hidden_units=[1024, 512, 256])
model.train(input_fn=input_func, steps=5000)
I tried the following:
wt_names = model.get_variable_names()
wt_vals = [model.get_variable_value(name) for name in wt_names]
wt_names:
['dnn/hiddenlayer_0/bias',
'dnn/hiddenlayer_0/bias/t_0/Adagrad',
'dnn/hiddenlayer_0/kernel',
'dnn/hiddenlayer_0/kernel/t_0/Adagrad',
'dnn/hiddenlayer_1/bias',
....
wt_vals:
model.get_variable_value('dnn/hiddenlayer_0/kernel')
array([[-0.05203109, -0.08008841, -0.07939883, ..., 0.00460025,
-0.08133098, -0.00713339],
[ 0.06286905, 0.01680468, 0.13167404, ..., -0.06170678,
-0.06767021, 0.05019882],
[ 0.07433462, -0.01052287, -0.10441218, ..., -0.081627 ,
-0.06397511, -0.03532334],
...,
I am not sure how to work out from these weights which features rank highest.
Upvotes: 0
Views: 487
Reputation: 21
Check out the following feature selection link: https://machinelearningmastery.com/feature-selection-machine-learning-python/
It gives you four ways to figure this out: Univariate Selection, Recursive Feature Elimination, Principal Component Analysis, and Feature Importance.
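As a minimal sketch of the first of those approaches (Univariate Selection) with scikit-learn's SelectKBest — using synthetic data as a stand-in for your 50-feature matrix, so swap in your own X and y:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic stand-in for the real data: 50 features, binary target.
X, y = make_classification(n_samples=500, n_features=50,
                           n_informative=5, random_state=0)

# Score each feature independently with an ANOVA F-test and keep the top 10.
selector = SelectKBest(score_func=f_classif, k=10)
selector.fit(X, y)

# Rank all 50 features by their univariate score, highest first.
ranking = np.argsort(selector.scores_)[::-1]
print("Top 10 feature indices:", ranking[:10])
```

You would then train the DNNClassifier on only the selected columns (`selector.transform(X)`).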
Upvotes: 1
Reputation: 21
A little research tells me that you cannot directly determine which feature is most influential from a DNN. You need to do feature selection before you get to the DNN.
You can use scikit-learn to reduce the features: http://scikit-learn.org/stable/modules/feature_selection.html
You can use "Removing features with low variance", "Recursive feature elimination", and other methods to reduce the feature set.
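A minimal sketch of those two scikit-learn techniques chained together — low-variance filtering followed by recursive feature elimination. The data, threshold, and logistic-regression scorer here are illustrative assumptions; substitute your own X, y, and estimator:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import VarianceThreshold, RFE
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the real 50-feature binary-classification data.
X, y = make_classification(n_samples=500, n_features=50,
                           n_informative=5, random_state=0)

# 1) Drop features whose variance falls below a (dataset-dependent) threshold.
vt = VarianceThreshold(threshold=0.1)
X_reduced = vt.fit_transform(X)

# 2) Recursively eliminate features, using a simple linear model to rank them.
rfe = RFE(estimator=LogisticRegression(max_iter=1000), n_features_to_select=10)
rfe.fit(X_reduced, y)

# rfe.ranking_ assigns 1 to kept features; larger numbers were eliminated earlier.
selected = np.where(rfe.support_)[0]
print("Selected feature indices (within the variance-filtered set):", selected)
```

The reduced feature set can then be fed to the DNNClassifier in place of the full 50 columns.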
Upvotes: 0