Avi

Reputation: 2283

Decision Tree: Probability of prediction inversely proportional in python

I would like to make the prediction probability for each class inversely proportional to the probabilities my decision tree predicts, something like what is described here on page 9 in formula 4.1. How can I do it, referring to my code:

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn import tree
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.data"
c = pd.read_csv(url, header=None)
X = c.values[:, 1:8]   # physical measurements as features
Y = c.values[:, 0]     # first column (Sex) as the class label
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=100)
clf_entropy = DecisionTreeClassifier(criterion = "entropy", random_state = 100,
 max_depth=3, min_samples_leaf=5)
clf_entropy.fit(X_train, y_train)
probs = clf_entropy.predict_proba(X_test)
probs

The goal is to replace zero probabilities with a small non-zero value, invert the probabilities, and normalize them so they form a distribution again. Labels are then selected such that the probability of selection is inversely proportional to the current tree's predictions.

Upvotes: 1

Views: 464

Answers (1)

Venkatachalam

Reputation: 16966

The mentioned equation can be implemented with the following snippet.

def inverse_prob(model_probs):
    # Replace zero probabilities with a small non-zero value to avoid division by zero
    model_probs = np.where(model_probs == 0, 1e-5, model_probs)
    # Invert, then renormalize each row so it sums to 1 again
    inverse = 1 / model_probs
    return inverse / inverse.sum(axis=1, keepdims=True)

A small value (1e-5) is substituted wherever the given probability distribution contains zeros, so the inversion does not divide by zero; each row is then renormalized into a proper distribution.
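As a rough sketch of the label-selection step described in the question, the inverted distribution can then be passed to NumPy's random choice to draw one label per test sample (this assumes clf_entropy and X_test from the question's code; the seed is arbitrary):

probs = clf_entropy.predict_proba(X_test)   # shape (n_samples, n_classes)
inv_probs = inverse_prob(probs)             # each row sums to 1
rng = np.random.default_rng(100)
# Draw one label per test sample, with selection probability inversely
# proportional to the tree's predicted probability for that class
sampled_labels = np.array([rng.choice(clf_entropy.classes_, p=row) for row in inv_probs])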

Upvotes: 1
