Reputation: 2365
For example, the VotingClassifier expects a list of estimators, but in my case the different estimators have already produced results (in the form of probabilities for each possible label, e.g. [0.8, 0.2, 0.0, 0.0]) for both the training dataset and the result dataset. Is there a way to use these instead of the actual classifiers?
Upvotes: -1
Views: 97
Reputation: 5119
If you already have the probabilities calculated, then you can use simple equivalent numpy code. Note that you would need to generalise the example to the case with many predictions :)
import numpy as np
class_1 = [0.5, 0.4, 0.1, 0.0]
class_2 = [0.0, 0.4, 0.6, 0.0]
class_3 = [0.5, 0.4, 0.05, 0.05]
# one row per classifier, one column per class
class_combined = np.array([class_1, class_2, class_3])

# VotingClassifier(voting='hard'): each classifier votes for its
# highest-probability class, then take the majority vote
hard_voting = class_combined.argmax(axis=1)
hard = np.bincount(hard_voting).argmax()  # 0
# VotingClassifier(voting='soft'): sum the probabilities across
# classifiers, then pick the class with the highest total
soft_sum = class_combined.sum(axis=0)
soft = soft_sum.argmax()  # 1
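The example above handles a single sample. A minimal sketch of how it might be generalised to many predictions: stack each classifier's probabilities into an (n_classifiers, n_samples, n_classes) array and vote along the right axes. The numbers here are made up for illustration.

```python
import numpy as np

# Hypothetical data: 3 classifiers, 2 samples, 4 classes
probas = np.array([
    [[0.5, 0.4, 0.1, 0.0],    # classifier 1
     [0.1, 0.7, 0.1, 0.1]],
    [[0.0, 0.4, 0.6, 0.0],    # classifier 2
     [0.2, 0.6, 0.1, 0.1]],
    [[0.5, 0.4, 0.05, 0.05],  # classifier 3
     [0.3, 0.3, 0.2, 0.2]],
])  # shape: (n_classifiers, n_samples, n_classes)

# soft voting: average probabilities over classifiers, argmax per sample
soft = probas.mean(axis=0).argmax(axis=1)

# hard voting: each classifier's predicted class per sample,
# then the majority vote down the classifier axis
votes = probas.argmax(axis=2)  # shape: (n_classifiers, n_samples)
hard = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

Both `soft` and `hard` are arrays of length n_samples, one predicted class index per sample, which mirrors what `VotingClassifier.predict` would return.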
Upvotes: 2