Reputation: 41
I am attempting to apply kNN, logistic regression, decision tree, and random forest to predict a binary response variable.
The first three produce seemingly reasonable accuracy rates, but running the random forest algorithm produces an accuracy rate of over 99% (1127/1128 correct).
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

vote_lst = list(range(1, 101))
rf_cv_scores = []
for tree_count in vote_lst:
    maple = RandomForestClassifier(n_estimators = tree_count, random_state = 1618)
    scores = cross_val_score(maple, x, y, cv = 10, scoring = 'accuracy') # 10-fold CV
    rf_cv_scores.append(scores.mean())
# find minimum error's index (i.e. optimal num. of estimators)
rf_MSE = [1 - score for score in rf_cv_scores]  # avoid shadowing the feature matrix x
rf_min_index = 0
min_error = rf_MSE[0]
for i in range(len(rf_MSE)):
    if rf_MSE[i] < min_error:
        rf_min_index = i
        min_error = rf_MSE[i]
print(rf_min_index + 1) # error minimized w/ 66 estimators
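The minimum-error search above can be collapsed into a single `np.argmin` call. A minimal sketch, assuming `rf_cv_scores` has already been filled by the CV loop (the scores below are placeholder values for illustration):

```python
import numpy as np

# placeholder CV scores; in the real code these come from the loop above
rf_cv_scores = [0.90, 0.95, 0.93, 0.97, 0.94]

rf_MSE = [1 - s for s in rf_cv_scores]
rf_min_index = int(np.argmin(rf_MSE))  # index of the smallest CV error
print(rf_min_index + 1)                # optimal n_estimators
```

This also sidesteps the uninitialized-index problem entirely, since `argmin` always returns a valid index.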
I tuned the random forest hyperparameter n_estimators
using the code above. Then I fit the model on my data:
# fit random forest classifier
forest_classifier = RandomForestClassifier(n_estimators = rf_min_index + 1, random_state = 1618)
forest_classifier.fit(x, y)
# predict test set
y_pred_forest = forest_classifier.predict(x)
I'm concerned that some drastic overfitting occurred here: any ideas?
Upvotes: 0
Views: 1706
Reputation: 1009
I'm concerned that some drastic overfitting occurred here: any ideas?
You're making predictions on the same dataset you trained on, so the near-perfect accuracy mostly reflects memorization of the training data:
y_pred_forest = forest_classifier.predict(x)
Upvotes: 0