Pegah

Reputation: 121

How to use weak learners in Adaboost?

I'm using AdaBoost and I have a question about weak learners. In the AdaBoost algorithm below, can I use a different algorithm to derive the model in step (4) on each round of the loop in step (2)? For example, when i = 1 I use KNN, when i = 2 I use SVM, and when i = 3 I use a decision tree (see the sketch after the pseudocode). Or should I use a single algorithm in all k iterations of the for loop?

(1) initialize the weight of each tuple in D to 1/d;
(2) for i = 1 to k do // for each round:
(3) sample D with replacement according to the tuple weights to obtain Di;
(4) use training set Di to derive a model, Mi;
(5) compute error(Mi), the error rate of Mi (Eq. 8.34)
(6) if error(Mi) > 0.5 then
(7) go back to step 3 and try again;
(8) endif
(9) for each tuple in Di that was correctly classified do
(10) multiply the weight of the tuple by error(Mi)/(1 − error(Mi)); // update weights
(11) normalize the weight of each tuple;
(12) endfor
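
For concreteness, here is a rough Python sketch of what I mean, where step (4) fits a different scikit-learn classifier on each round. The classifier list and the iris data are just placeholders for my real setup:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def adaboost_rounds(X, y, learners):
    """One boosting round per entry in `learners` (steps (1)-(12) above)."""
    d = len(X)
    w = np.full(d, 1.0 / d)                        # (1) weight of each tuple = 1/d
    models, betas = [], []
    for model in learners:                         # (2) for each round
        while True:
            idx = np.random.choice(d, size=d, p=w) # (3) sample D by tuple weights
            model.fit(X[idx], y[idx])              # (4) derive model Mi
            correct = model.predict(X) == y
            error = w[~correct].sum()              # (5) weighted error rate of Mi
            if error <= 0.5:                       # (6)-(8) retry while error > 0.5
                break
        beta = error / (1.0 - error)
        models.append(model)
        betas.append(beta)
        if error == 0:          # perfect round: the update would zero all weights
            break
        w[correct] *= beta      # (9)-(10) shrink weights of correct tuples
        w /= w.sum()            # (11) normalize the weights
    return models, betas

X, y = load_iris(return_X_y=True)                  # placeholder data
models, betas = adaboost_rounds(
    X, y, [KNeighborsClassifier(), SVC(), DecisionTreeClassifier()])
```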

Upvotes: 1

Views: 1094

Answers (1)

ostrokach

Reputation: 19912

AdaBoost is usually used with weak learners, such as short decision trees. You can use more complex learners, but in that case AdaBoost may not be the best way to combine their results.

Most implementations (like scikit-learn's `AdaBoostClassifier`) assume that you will use the same learner at every step, but it shouldn't be too difficult to change this.
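
As a sketch of the usual single-learner setup, here is `AdaBoostClassifier` boosting a depth-one decision tree (a "decision stump"). The iris data is just a placeholder; note the parameter is named `estimator` in recent scikit-learn releases (`base_estimator` in older ones):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # placeholder dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Depth-1 trees ("decision stumps") are the classic AdaBoost weak learner.
clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                         n_estimators=50)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```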

Also, this question might be better suited for https://stats.stackexchange.com/.

Upvotes: 2
