SDS

Reputation: 331

ValueError: bad input shape in sklearn Python

I have two lists, features and labels. features contains Diseases, Age, Gender and PIN; labels contains Health-Plan.

The user passes user_input, which has the same format as a row of features. The code should predict the Health-Plan for the user using the DecisionTree of the sklearn API.

A few of the parameters in features are strings, e.g. Diseases and Gender, so I am encoding them with LabelEncoder to avoid the error 'ValueError: could not convert string to float'.

Now, after using LabelEncoder, I get the following exception: 'ValueError: bad input shape'.

How can I fix this issue, and afterwards reverse the encoding so I get the original strings back? Please help.

from sklearn import tree
from sklearn.preprocessing import LabelEncoder
features = [['TB' , 28, 'MALE', 121001], ['TB' , 28, 'FEMALE', 121002], ['CANCER' , 28, 'MALE', 121001], ['CANCER' , 28, 'FEMALE', 121001]]
labels = ['X125434', 'X125436','X125437' , 'X125437']
user_input = ['TB' , 28, 'MALE', 121001]

le = LabelEncoder()

# this line raises 'ValueError: bad input shape' -- LabelEncoder expects a 1-D array
X = le.fit_transform(features)
y = le.fit_transform(labels)
new_user_input = le.fit_transform(user_input)

clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, y)

print(clf.predict([new_user_input]))

Upvotes: 1

Views: 17452

Answers (2)

ram nithin

Reputation: 119

It is not recommended to use the same label encoder for all the features in the data set. It is safer to create one label encoder per column, because each feature has its own set of values.

from sklearn import tree
from sklearn.preprocessing import LabelEncoder
import pandas as pd

features = [['TB', 28, 'MALE', 121001], ['TB', 28, 'FEMALE', 121002], ['CANCER', 28, 'MALE', 121001], ['CANCER', 28, 'FEMALE', 121001]]
labels = ['X125434', 'X125436', 'X125437', 'X125437']
feature_names = ['Disease', 'Age', 'Gender', 'PIN']

user_input = ['TB', 28, 'MALE', 121001]

# build a training frame, plus a one-row test frame from the user input
train = pd.DataFrame(data=features, columns=feature_names)
train['Labels'] = labels

test = pd.DataFrame(columns=feature_names)
test.loc[len(test)] = user_input

# one encoder per string column, so each can be inverted independently
le_disease = LabelEncoder()
le_gender = LabelEncoder()
le_labels = LabelEncoder()

train['Disease'] = le_disease.fit_transform(train['Disease'])
train['Gender'] = le_gender.fit_transform(train['Gender'])
train['Labels'] = le_labels.fit_transform(train['Labels'])

# reuse the already-fitted encoders on the test row (transform, not fit_transform)
test['Disease'] = le_disease.transform(test['Disease'])
test['Gender'] = le_gender.transform(test['Gender'])

clf = tree.DecisionTreeClassifier()
clf = clf.fit(train[feature_names], train['Labels'])

# decode the predicted label back to the original plan code
print(le_labels.inverse_transform(clf.predict(test[feature_names])))

LabelEncoder.inverse_transform() can be used to get the original data back.
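As a minimal round-trip sketch, using the label values from the question:

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
# fit_transform maps the sorted unique values to 0..n-1
encoded = le.fit_transform(['X125434', 'X125436', 'X125437', 'X125437'])
print(encoded)  # [0 1 2 2]
# inverse_transform recovers the original strings
print(le.inverse_transform(encoded))  # ['X125434' 'X125436' 'X125437' 'X125437']
```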

Upvotes: 7

Anatoly Vasilyev

Reputation: 411

According to the LabelEncoder documentation, you're using it the wrong way, so the exception you are getting is telling you exactly the right thing.

In your case, I think you want to encode Diseases, Gender and Health-Plan as integers: for instance, TB and CANCER become 0 and 1, MALE and FEMALE become 0 and 1 as well, and X125434, X125436, X125437 are encoded as 0, 1, 2.

Example:

from sklearn import tree
from sklearn.preprocessing import LabelEncoder

features = [
    ['TB' , 28, 'MALE', 121001],
    ['TB' , 28, 'FEMALE', 121002],
    ['CANCER' , 28, 'MALE', 121001],
    ['CANCER' , 28, 'FEMALE', 121001]]
labels = ['X125434', 'X125436','X125437' , 'X125437']
user_input = ['TB' , 28, 'MALE', 121001]

# use different encoders for different data
le = LabelEncoder()
le_diseases = LabelEncoder()
le_gender = LabelEncoder()

diseases = [features_list[0] for features_list in features]
gender = [features_list[2] for features_list in features]

features_preprocessed = []
diseases_labels = le_diseases.fit_transform(diseases)
gender_labels = le_gender.fit_transform(gender)
for i, features_list in enumerate(features):
    features_preprocessed.append([
        diseases_labels[i],
        features[i][1],
        gender_labels[i],
        features[i][3]])

labels_preprocessed = le.fit_transform(labels)

# ... then use features_preprocessed, labels_preprocessed and the label encoders above
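One possible shape for that elided final step, sketched here with the preprocessing reproduced so it runs on its own (the key point is that the user input must be transformed with the already-fitted encoders, not re-fitted):

```python
from sklearn import tree
from sklearn.preprocessing import LabelEncoder

features = [['TB', 28, 'MALE', 121001], ['TB', 28, 'FEMALE', 121002],
            ['CANCER', 28, 'MALE', 121001], ['CANCER', 28, 'FEMALE', 121001]]
labels = ['X125434', 'X125436', 'X125437', 'X125437']
user_input = ['TB', 28, 'MALE', 121001]

le = LabelEncoder()
le_diseases = LabelEncoder()
le_gender = LabelEncoder()

# encode the two string columns, leave the numeric ones alone
diseases_labels = le_diseases.fit_transform([row[0] for row in features])
gender_labels = le_gender.fit_transform([row[2] for row in features])
features_preprocessed = [[diseases_labels[i], row[1], gender_labels[i], row[3]]
                         for i, row in enumerate(features)]
labels_preprocessed = le.fit_transform(labels)

# transform (not fit_transform) the user input with the fitted encoders
user_preprocessed = [le_diseases.transform([user_input[0]])[0], user_input[1],
                     le_gender.transform([user_input[2]])[0], user_input[3]]

clf = tree.DecisionTreeClassifier()
clf.fit(features_preprocessed, labels_preprocessed)

prediction = clf.predict([user_preprocessed])
# decode the integer prediction back to the original plan code
print(le.inverse_transform(prediction))
```

Since the user input here is identical to the first training row, the decoded prediction is its plan code, X125434.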

P.S. I suggest using pandas data frames instead of lists: as you can see from the example above, working with lists doesn't look clean in cases like this. Your features would look like:

import pandas as pd
features_df = pd.DataFrame({
    'Diseases': ['TB' , 'TB', 'CANCER', 'CANCER'],
    'Age': [28, 28, 28, 28],
    'Gender': ['MALE', 'FEMALE', 'MALE', 'FEMALE'],
    'PIN': [121001, 121002, 121001, 121001]
})
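With that frame, the per-column encoding collapses into a short loop. A sketch (the `encoders` dict is my own naming, kept around so each column can be inverted later):

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

features_df = pd.DataFrame({
    'Diseases': ['TB', 'TB', 'CANCER', 'CANCER'],
    'Age': [28, 28, 28, 28],
    'Gender': ['MALE', 'FEMALE', 'MALE', 'FEMALE'],
    'PIN': [121001, 121002, 121001, 121001]
})

# one fitted encoder per string column, stored for later inverse_transform
encoders = {}
for col in ['Diseases', 'Gender']:
    encoders[col] = LabelEncoder()
    features_df[col] = encoders[col].fit_transform(features_df[col])

print(features_df)  # Diseases and Gender are now integer-coded
```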

Upvotes: 2
