Reputation: 817
Hi, I'm currently taking a course and doing a survey on AdaBoost.
I've seen some code that uses AdaBoost to boost the performance of a neural network.
As far as I know, with multiple classes AdaBoost can be done by:
(1) Initializing the weight of each training sample to 1 (or uniformly, 1/N).
(2) After training a classifier, increasing the weight of each sample the
classifier gets wrong, and decreasing the weight of each sample it predicts correctly.
(3) Finally, combining all the classifiers and taking the class with the maximum (weighted) probability.
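The three steps above can be sketched as a minimal SAMME-style loop. This is an illustrative sketch, not code from the question: it uses a depth-1 decision tree as a stand-in weak learner, and all dataset sizes and round counts are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_classes=3, n_informative=5, random_state=0)
K = 3                                 # number of classes
n = len(y)
w = np.full(n, 1.0 / n)               # step (1): uniform initial weights
learners, alphas = [], []

for m in range(10):
    clf = DecisionTreeClassifier(max_depth=1, random_state=m)
    clf.fit(X, y, sample_weight=w)
    miss = clf.predict(X) != y
    err = w[miss].sum() / w.sum()     # weighted error of this round's learner
    if err >= 1 - 1.0 / K:            # no better than random guessing: stop
        break
    # SAMME learner weight; the log(K - 1) term generalizes AdaBoost to K classes
    alpha = np.log((1 - err) / max(err, 1e-10)) + np.log(K - 1)
    w *= np.exp(alpha * miss)         # step (2): up-weight the misclassified samples
    w /= w.sum()
    learners.append(clf)
    alphas.append(alpha)

# step (3): weighted vote over all learners, take the arg-max class
votes = np.zeros((n, K))
for clf, a in zip(learners, alphas):
    votes[np.arange(n), clf.predict(X)] += a
ensemble_pred = votes.argmax(axis=1)
```

Note that the weak learner must accept `sample_weight` in `fit`, which is exactly what makes plugging in an arbitrary Keras model non-trivial.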
I tried to sketch some code for it with Keras and sklearn:
model = Model(img_input, o)
model.fit_generator()  # some parameters
from sklearn.ensemble import AdaBoostClassifier
adaboost = AdaBoostClassifier(base_estimator=model, algorithm='SAMME')
adaboost.fit()  # some parameters; note that sklearn estimators expose fit(X, y), not fit_generator
My question is:
I would like to know how AdaBoost is used with a neural network.
I can imagine two ways to do this, but I'm not sure which one AdaBoost uses here:
(1) After training completes (say, after 1 hour), we re-weight the training data, retrain, and repeat until the boosting iterations are over.
(2) After the first pass over all the data (one epoch) has been fed into the neural network, we re-weight the training data.
The difference between (1) and (2) is how we define one iteration in AdaBoost:
(1) would take too long to complete all the iterations.
(2) just somehow doesn't make sense to me, because I don't think the whole process would converge that fast; otherwise the number of iterations would need to be set very large.
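For reference, option (1) is how AdaBoost is normally applied: each boosting round trains one network to completion, then re-weights the data. A hedged sketch, using sklearn's MLPClassifier as a small stand-in network and resampling the data by weight (a common workaround, since MLPClassifier.fit does not accept sample_weight); every name and number here is an illustrative assumption:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_classes=3, n_informative=6, random_state=1)
rng = np.random.default_rng(0)
n, K = len(y), 3
w = np.full(n, 1.0 / n)
nets, alphas = [], []

for m in range(3):                        # 3 boosting rounds = 3 fully trained networks
    idx = rng.choice(n, size=n, p=w)      # resample by weight instead of sample_weight
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=m)
    net.fit(X[idx], y[idx])               # option (1): train this network to completion
    miss = net.predict(X) != y
    err = min(max(w[miss].sum(), 1e-10), 1 - 1e-10)
    alpha = np.log((1 - err) / err) + np.log(K - 1)   # SAMME learner weight
    w *= np.exp(alpha * miss)             # re-weight, then start the next round
    w /= w.sum()
    nets.append(net)
    alphas.append(alpha)
```

This makes the cost concrete: the total training time is (rounds × time to train one network), which is exactly why (1) feels slow; but it is the definition of an AdaBoost iteration.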
Upvotes: 0
Views: 993
Reputation: 817
It seems that only a few people go this way.
I think I would choose the "stacking" method instead.
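For completeness, stacking trains several base models and a meta-model on top of their predictions. A minimal sketch with sklearn's StackingClassifier; the base estimators chosen here are arbitrary placeholders, not a recommendation:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_classes=3, n_informative=5, random_state=0)

# Base models produce predictions; a logistic regression learns to combine them.
stack = StackingClassifier(
    estimators=[('tree', DecisionTreeClassifier(max_depth=3)),
                ('knn', KNeighborsClassifier())],
    final_estimator=LogisticRegression(max_iter=500))
stack.fit(X, y)
```

Unlike boosting, the base models here train independently, so a slow-to-train neural network only needs to be fit once.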
Upvotes: 1