Reputation: 11
I am new to ensemble learning and its methods, and I have built the following model using sklearn:
import xgboost as xgb
from sklearn.preprocessing import RobustScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier

std = RobustScaler()
std.fit(train_feats)
train_feats = std.transform(train_feats)
val_feats = std.transform(val_feats)

# define base learners
# XGBoost classifier
xgb_classifier = xgb.XGBClassifier(objective='binary:logistic',
                                   learning_rate=0.1,
                                   n_estimators=10,
                                   max_depth=1,
                                   subsample=0.4,
                                   random_state=234)
# SVM
svm_classifier = SVC(gamma=0.1,
                     C=0.1,
                     kernel='poly',
                     degree=3,
                     coef0=10.0,
                     probability=True)
# random forest classifier
rf_classifier = RandomForestClassifier(n_estimators=10,
                                       max_features="sqrt",
                                       criterion='entropy',
                                       class_weight='balanced')

# define meta-learner
voting_clf = VotingClassifier([("xgb", xgb_classifier),
                               ("svm", svm_classifier),
                               ("rf", rf_classifier)],
                              voting="soft",
                              flatten_transform=True)
voting_clf.fit(train_feats, train_labels)
The model has been training for 5 hours. The shape of train_feats is (18000, 29). Is it normal for the voting classifier to run for 5 hours with no sign of ever stopping, or is there a bug here? I don't want to stop the training and re-run unless I know something is actually wrong.
Is there a bug that is slowing down the training, or does this kind of model generally just take a long time to train?
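One way I thought of to narrow this down is to time each base learner on its own, on a small random subsample first, before fitting the full ensemble. This is just a rough sketch on synthetic data of the same width (29 features, 1000 rows instead of 18000; the XGBoost learner is left out here since it is usually not the slow part):

```python
import time
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# synthetic stand-in for a subsample of train_feats / train_labels
rng = np.random.default_rng(234)
X = rng.normal(size=(1000, 29))
y = rng.integers(0, 2, size=1000)

times = {}
for name, clf in [
    ("svm", SVC(gamma=0.1, C=0.1, kernel='poly', degree=3, coef0=10.0,
                probability=True)),
    ("rf", RandomForestClassifier(n_estimators=10, max_features="sqrt",
                                  criterion='entropy',
                                  class_weight='balanced')),
]:
    start = time.perf_counter()
    clf.fit(X, y)  # fit each base learner in isolation
    times[name] = time.perf_counter() - start
    print(f"{name}: {times[name]:.1f}s")
```

If the SVC dominates the timing even at this size, that would suggest the long run is expected rather than a bug, since SVC training scales worse than linearly with the number of samples, and probability=True adds an internal cross-validation on top.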
Upvotes: 0
Views: 727