Reputation: 2156
I'm running a bunch of models with scikit-learn to solve a classification problem.
How do I iterate through different scikit-learn models?
from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.dummy import DummyClassifier

classifiers_name = ['AdaBoostClassifier',
                    'BernoulliNB',
                    'DummyClassifier']

def fitting_classifier(clf, X_train, y_train):
    return clf.fit(X_train, y_train)

for clf_n in classifiers_name:
    locals()['results_' + clf_n] = fitting_classifier(locals()[clf_n + str(())], X_train, y_train)
I seem to be getting an error in this part of the code: fitting_classifier(locals()[clf_n + str(())], X_train, y_train). The error shown is:
<ipython-input-31-cccf30ff4392> in summary_scores(file_path, image_format, scores)
140 for clf_sn in classifiers_name:
--> 141 locals()['results_' + clf_n] = fitting_classifier(locals()[clf_n + str(())], X_train, y_train)
142
143 # results_AdaBoostClassifier = fitting_classifier(AdaBoostClassifier(), X_train, y_train)
KeyError: 'AdaBoostClassifier()'
Any help with this would really be appreciated. Thank you.
Upvotes: 0
Views: 1482
Reputation: 29
You haven't mentioned the purpose of this. Why exactly do you want to iterate through different scikit-learn models?
If you are trying to find out which of the above models fits better and outperforms the others, you can use something like this:
# -------- Cross validate the models with stratified k-fold cross validation --------
import pandas as pd
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.dummy import DummyClassifier

kfold = StratifiedKFold(n_splits=10)

# Modeling step: test the different algorithms.
# cross_val_score expects estimator instances, not their names as strings.
classifiers = [AdaBoostClassifier(),
               BernoulliNB(),
               DummyClassifier()]

results = []
for model in classifiers:
    results.append(cross_val_score(model, X_train, y=y_train, scoring="accuracy", cv=kfold, n_jobs=4))

cv_means = []
cv_std = []
for cv_result in results:
    cv_means.append(cv_result.mean())
    cv_std.append(cv_result.std())

cv_res = pd.DataFrame({"CrossValMeans": cv_means,
                       "CrossValerrors": cv_std,
                       "Algorithm": ["AdaBoostClassifier", "BernoulliNB", "DummyClassifier"]})
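Incidentally, the KeyError in your question comes from looking up the string 'AdaBoostClassifier()' in locals(), which never contains a key by that name. A plain dict that maps a name to an estimator instance sidesteps locals() entirely; here is a minimal sketch, assuming X_train and y_train are already defined as in your code:

from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.dummy import DummyClassifier

# Map a readable name to each estimator instance
classifiers = {
    "AdaBoostClassifier": AdaBoostClassifier(),
    "BernoulliNB": BernoulliNB(),
    "DummyClassifier": DummyClassifier(),
}

# Fit each model and keep the fitted estimators under their names,
# instead of creating results_<name> variables through locals()
results = {name: clf.fit(X_train, y_train) for name, clf in classifiers.items()}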
If you are trying to ensemble these models:
Train them separately, use a hyperparameter search to find the best estimator for each model, and then combine them with a VotingClassifier:
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.model_selection import GridSearchCV

DTC = DecisionTreeClassifier()
ADB = AdaBoostClassifier(DTC)
ada_param_grid = {}  # put the AdaBoost params to search here
gsABC = GridSearchCV(ADB, param_grid=ada_param_grid, cv=kfold, scoring="accuracy", n_jobs=4, verbose=1)
gsABC.fit(X_train, y_train)  # fit the grid search before asking for the best estimator
AdaBoost_best = gsABC.best_estimator_

# Likewise for the other classifiers, and then perform the voting
votingC = VotingClassifier(estimators=[('ada', AdaBoost_best), ('nb', BernoulliNB_best),
                                       ('dc', DummyClassifier_best)], voting='soft', n_jobs=4)
votingC = votingC.fit(X_train, y_train)
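For completeness, a sketch of how the remaining *_best estimators referenced above could be produced with the same GridSearchCV pattern, and how the ensemble could then be scored. The param grids here are only placeholders, and X_train/y_train/kfold are assumed as before:

from sklearn.naive_bayes import BernoulliNB
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

# Grid-search the other two models the same way (placeholder param grids)
gsNB = GridSearchCV(BernoulliNB(), param_grid={"alpha": [0.1, 0.5, 1.0]},
                    cv=kfold, scoring="accuracy", n_jobs=4)
gsNB.fit(X_train, y_train)
BernoulliNB_best = gsNB.best_estimator_

gsDC = GridSearchCV(DummyClassifier(), param_grid={"strategy": ["most_frequent", "stratified"]},
                    cv=kfold, scoring="accuracy", n_jobs=4)
gsDC.fit(X_train, y_train)
DummyClassifier_best = gsDC.best_estimator_

# Cross-validate the voting ensemble for an accuracy estimate comparable to the table above
voting_scores = cross_val_score(votingC, X_train, y_train, scoring="accuracy", cv=kfold, n_jobs=4)
print(voting_scores.mean(), voting_scores.std())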
Upvotes: 1