artemis

Reputation: 7281

LIME feature explanation produces invalid key error

I have an MLPRegressor that works really well with my dataset. Here is a trimmed version of my code, with some unnecessary parts cut out:

from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn import preprocessing
import pandas as pd
import numpy as np

def str_to_num(arr):
    le = preprocessing.LabelEncoder()
    new_arr = le.fit_transform(arr)
    return new_arr

def compare_values(arr1, arr2):
    thediff = 0
    thediffs = []
    for thing1, thing2 in zip(arr1, arr2):
        thediff = abs(thing1 - thing2)
        thediffs.append(thediff)

    return thediffs

def minmaxscale(data):
    scaler = MinMaxScaler()
    df_scaled = pd.DataFrame(scaler.fit_transform(data), columns=data.columns)
    return df_scaled

data = pd.read_csv('reg.csv')
label = data['TOTAL']
data = data.drop('TOTAL', axis=1)
data = minmaxscale(data)

mlp = MLPRegressor(
    activation = 'tanh',
    alpha = 0.005,
    learning_rate = 'invscaling',
    learning_rate_init = 0.01,
    max_iter = 200,
    momentum = 0.9,
    solver = 'lbfgs',
    warm_start = True
)

X_train, X_test, y_train, y_test = train_test_split(data, label, test_size = 0.2)
mlp.fit(X_train, y_train)
preds = mlp.predict(X_test)
score = compare_values(y_test, preds)
print("Score: ", np.average(score))

And it works great! Producing: Score: 7.246851606714535

However, I want to see the feature importances in this model. I understand that is not usually the point of a neural network, but there is a business justification, so it is necessary. I discovered LIME via the LIME paper and would like to use it. Since this is regression, I tried to follow the example here

So I added the following lines:

categorical_features = np.argwhere(np.array([len(set(data[:,x])) for x in range(data.shape[1])]) <= 10).flatten()

explainer = lime.lime_tabular.LimeTabularExplainer(
    X_train, 
    feature_names=X_train.columns, 
    class_names=['TOTAL'], 
    verbose=True,
    categorical_features = categorical_features, 
    mode='regression')

But now get the error:

Traceback (most recent call last):
  File "c:\Users\jerry\Desktop\mlp2.py", line 65, in <module>
    categorical_features = np.argwhere(np.array([len(set(data[:,x])) for x in range(data.shape[1])]) <= 10).flatten()
  File "c:\Users\J39304\Desktop\mlp2.py", line 65, in <listcomp>
    categorical_features = np.argwhere(np.array([len(set(data[:,x])) for x in range(data.shape[1])]) <= 10).flatten()
  File "C:\Python35-32\lib\site-packages\pandas\core\frame.py", line 2927, in __getitem__
    indexer = self.columns.get_loc(key)
  File "C:\Python35-32\lib\site-packages\pandas\core\indexes\base.py", line 2657, in get_loc
    return self._engine.get_loc(key)
  File "pandas\_libs\index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
  File "pandas\_libs\index.pyx", line 110, in pandas._libs.index.IndexEngine.get_loc
TypeError: '(slice(None, None, None), 0)' is an invalid key

Why am I getting this error and what can I do? I do not understand how to properly integrate LIME.

I see others have had this issue, but I don't know how to fix it.

Upvotes: 2

Views: 1896

Answers (1)

artemis

Reputation: 7281

The problem is that data[:,x] is NumPy-style 2D indexing, which a pandas DataFrame does not accept as a key; that is what raises the invalid key TypeError. I needed to first convert everything to a NumPy array:

class_names = X_train.columns
X_train = X_train.to_numpy()
X_test = X_test.to_numpy()
y_train = y_train.to_numpy()
y_test = y_test.to_numpy()
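The root cause can be reproduced in isolation: NumPy-style 2D indexing like df[:, 0] is not a valid key for a DataFrame, while .iloc (or a converted array) works fine. A minimal sketch with a made-up two-column frame (not the question's data):

```python
import pandas as pd

# Hypothetical stand-in DataFrame
df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})

# NumPy-style 2D indexing raises on a DataFrame - this is the invalid-key error
try:
    df[:, 0]
except Exception as e:
    print('raised:', type(e).__name__)

# Positional indexing works via .iloc ...
print(list(df.iloc[:, 0]))

# ... or via a plain NumPy array after conversion
print(list(df.to_numpy()[:, 0]))
```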

Then from there, feed that to the explainer:

explainer = lime.lime_tabular.LimeTabularExplainer(
    X_train, 
    feature_names=class_names, 
    class_names=['TOTAL'], 
    verbose=True, 
    mode='regression')

exp = explainer.explain_instance(X_test[5], mlp.predict)
exp = exp.as_list()
print(exp)
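Once the data is a NumPy array, the question's original categorical_features line also works unchanged, since X_train[:, x] is now valid indexing. A small sketch on a hypothetical array standing in for X_train:

```python
import numpy as np

# Hypothetical feature matrix: column 0 has 12 distinct values,
# column 1 only alternates between 0 and 1
X = np.column_stack([np.linspace(0.0, 1.0, 12), np.tile([0.0, 1.0], 6)])

# The question's heuristic: columns with <= 10 distinct values are categorical
categorical_features = np.argwhere(
    np.array([len(set(X[:, i])) for i in range(X.shape[1])]) <= 10
).flatten()
print(categorical_features)  # only column 1 qualifies
```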

Upvotes: 7
