Nir Kigelman

Reputation: 131

How to track categorical indices for catboost with sklearn pipeline

I want to track categorical features indices within sklearn pipeline, in order to supply them to CatBoostClassifier.

I start with a set of categorical features known before the pipeline's fit(). The pipeline itself changes the structure of the data, removing features in the feature selection step.

How can I know upfront which categorical features will be removed or added by the pipeline? I need the updated list of indices at the time I call fit(), but my dataset may change during the transformations.

Here is an example of my dataframe:

data = pd.DataFrame({'pet':      ['cat', 'dog', 'dog', 'fish', np.nan, 'dog', 'cat', 'fish'],
                     'children': [4., 6, 3, np.nan, 2, 3, 5, 4],
                     'salary':   [90., 24, np.nan, 27, 32, 59, 36, 27],
                     'gender':   ['male', 'male', 'male', 'male', 'male', 'male', 'male', 'male'],
                     'happy':    [0, 1, 1, 0, 1, 1, 0, 0]})

categorical_features = ['pet', 'gender']
numerical_features = ['children', 'salary']
target = 'happy'

print(data)

     pet    children    salary  gender  happy
0    cat    4.0         90.0    male    0
1    dog    6.0         24.0    male    1
2    dog    3.0         NaN     male    1
3    fish   NaN         27.0    male    0
4    NaN    2.0         32.0    male    1
5    dog    3.0         59.0    male    1
6    cat    5.0         36.0    male    0
7    fish   4.0         27.0    male    0

Now I want to run a pipeline with multiple steps. One of these steps is VarianceThreshold(), which in my case, will cause "gender" to be removed from the dataframe.

X, y = data.drop(columns=[target]), data[target]

pipeline = Pipeline(steps=[
    (
        'preprocessing',
        ColumnTransformer(transformers=[
            (
                'categoricals',
                Pipeline(steps=[
                    ('fillna_with_frequent', SimpleImputer(strategy='most_frequent')),
                    ('ordinal_encoder', OrdinalEncoder())
                ]),
                categorical_features
            ),
            (
                'numericals',
                Pipeline(steps=[
                    ('fillna_with_mean', SimpleImputer(strategy='mean'))
                ]),
                numerical_features
            )
        ])
    ),
    (
        'feature_selection',
        VarianceThreshold()
    ),
    (
        'estimator',
        CatBoostClassifier()
    )
])

Now when I try to get the list of categorical feature indices for CatBoost, I have no way to account for the fact that "gender" is no longer part of my dataframe.

cat_features = [data.columns.get_loc(col) for col in categorical_features]
print(cat_features)
[0, 3]

The indices [0, 3] are wrong, because after VarianceThreshold feature 3 (gender) will have been removed.

pipeline.fit(X, y, estimator__cat_features=cat_features)
---------------------------------------------------------------------------
CatBoostError                             Traceback (most recent call last)
<ipython-input-230-527766a70b4d> in <module>
----> 1 pipeline.fit(X, y, estimator__cat_features=cat_features)

~/anaconda3/lib/python3.7/site-packages/sklearn/pipeline.py in fit(self, X, y, **fit_params)
    265         Xt, fit_params = self._fit(X, y, **fit_params)
    266         if self._final_estimator is not None:
--> 267             self._final_estimator.fit(Xt, y, **fit_params)
    268         return self
    269 

~/anaconda3/lib/python3.7/site-packages/catboost/core.py in fit(self, X, y, cat_features, sample_weight, baseline, use_best_model, eval_set, verbose, logging_level, plot, column_description, verbose_eval, metric_period, silent, early_stopping_rounds, save_snapshot, snapshot_file, snapshot_interval, init_model)
   2801         self._fit(X, y, cat_features, None, sample_weight, None, None, None, None, baseline, use_best_model,
   2802                   eval_set, verbose, logging_level, plot, column_description, verbose_eval, metric_period,
-> 2803                   silent, early_stopping_rounds, save_snapshot, snapshot_file, snapshot_interval, init_model)
   2804         return self
   2805 

~/anaconda3/lib/python3.7/site-packages/catboost/core.py in _fit(self, X, y, cat_features, pairs, sample_weight, group_id, group_weight, subgroup_id, pairs_weight, baseline, use_best_model, eval_set, verbose, logging_level, plot, column_description, verbose_eval, metric_period, silent, early_stopping_rounds, save_snapshot, snapshot_file, snapshot_interval, init_model)
   1231         _check_train_params(params)
   1232 
-> 1233         train_pool = _build_train_pool(X, y, cat_features, pairs, sample_weight, group_id, group_weight, subgroup_id, pairs_weight, baseline, column_description)
   1234         if train_pool.is_empty_:
   1235             raise CatBoostError("X is empty.")

~/anaconda3/lib/python3.7/site-packages/catboost/core.py in _build_train_pool(X, y, cat_features, pairs, sample_weight, group_id, group_weight, subgroup_id, pairs_weight, baseline, column_description)
    689             raise CatBoostError("y has not initialized in fit(): X is not catboost.Pool object, y must be not None in fit().")
    690         train_pool = Pool(X, y, cat_features=cat_features, pairs=pairs, weight=sample_weight, group_id=group_id,
--> 691                           group_weight=group_weight, subgroup_id=subgroup_id, pairs_weight=pairs_weight, baseline=baseline)
    692     return train_pool
    693 

~/anaconda3/lib/python3.7/site-packages/catboost/core.py in __init__(self, data, label, cat_features, column_description, pairs, delimiter, has_header, weight, group_id, group_weight, subgroup_id, pairs_weight, baseline, feature_names, thread_count)
    318                         )
    319 
--> 320                 self._init(data, label, cat_features, pairs, weight, group_id, group_weight, subgroup_id, pairs_weight, baseline, feature_names)
    321         super(Pool, self).__init__()
    322 

~/anaconda3/lib/python3.7/site-packages/catboost/core.py in _init(self, data, label, cat_features, pairs, weight, group_id, group_weight, subgroup_id, pairs_weight, baseline, feature_names)
    638             cat_features = _get_cat_features_indices(cat_features, feature_names)
    639             self._check_cf_type(cat_features)
--> 640             self._check_cf_value(cat_features, features_count)
    641         if pairs is not None:
    642             self._check_pairs_type(pairs)

~/anaconda3/lib/python3.7/site-packages/catboost/core.py in _check_cf_value(self, cat_features, features_count)
    360                 raise CatBoostError("Invalid cat_features[{}] = {} value type={}: must be int().".format(indx, feature, type(feature)))
    361             if feature >= features_count:
--> 362                 raise CatBoostError("Invalid cat_features[{}] = {} value: must be < {}.".format(indx, feature, features_count))
    363 
    364     def _check_pairs_type(self, pairs):

CatBoostError: Invalid cat_features[1] = 3 value: must be < 3.

I expect the cat_features to be [0], but the actual output is [0, 3].
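Fitting the first two pipeline steps by hand confirms the problem; this is a sketch that uses VarianceThreshold.get_support() to see which columns of the preprocessed matrix survive, and to map the original categorical positions to post-selection indices:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_selection import VarianceThreshold
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OrdinalEncoder

data = pd.DataFrame({'pet':      ['cat', 'dog', 'dog', 'fish', np.nan, 'dog', 'cat', 'fish'],
                     'children': [4., 6, 3, np.nan, 2, 3, 5, 4],
                     'salary':   [90., 24, np.nan, 27, 32, 59, 36, 27],
                     'gender':   ['male'] * 8})
categorical_features = ['pet', 'gender']

# Same first two steps as the pipeline above
preprocessing = ColumnTransformer(transformers=[
    ('categoricals', Pipeline(steps=[
        ('fillna_with_frequent', SimpleImputer(strategy='most_frequent')),
        ('ordinal_encoder', OrdinalEncoder()),
    ]), categorical_features),
    ('numericals', SimpleImputer(strategy='mean'), ['children', 'salary']),
])
selector = VarianceThreshold()

Xt = preprocessing.fit_transform(data)  # columns: pet, gender, children, salary
selector.fit(Xt)
support = selector.get_support()        # gender (all 'male') has zero variance
surviving = np.flatnonzero(support)
# categorical columns come first in Xt, so original indices < len(categorical_features)
cat_features = [j for j, i in enumerate(surviving) if i < len(categorical_features)]
print(cat_features)                     # [0]
```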

Upvotes: 9

Views: 5799

Answers (4)

Sammie

Reputation: 63

The reason you are getting an error is that your current cat_features are derived from the non-transformed dataset. To fix this, you have to derive cat_features after your dataset has been transformed. This is how I tracked mine: I fit the transformer to the dataset, wrapped the transformed output in a pandas data frame, and then retrieved the categorical indices.

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import MinMaxScaler

# numerical_idx: positions of the numeric columns in X
column_transform = ColumnTransformer([('n', MinMaxScaler(), numerical_idx)], remainder='passthrough')
scaled_X = column_transform.fit_transform(X)
new_df = pd.DataFrame(scaled_X)
new_df = new_df.infer_objects()  # converts each column to its most accurate dtype
cat_features_new = [new_df.columns.get_loc(col) for col in new_df.select_dtypes(include=['object', 'bool']).columns]

Upvotes: 4

Asgeir Berland

Reputation: 11

The underlying problem here is that transformers do not follow a predefined output schema: a single column may be transformed into several (e.g. one categorical column into three).

As such, you need to keep track of the number of features you generate yourself.

My solution was to organize the Pipeline in such a way that I knew in advance which indices corresponded to the categorical columns at the last step (the CatBoost estimator). Typically, I isolate and wrap all categorical-related operations within a single transformer (which can contain sub-transformations), and keep track of how many columns it outputs. Crucially, set this transformer as the first one in your ColumnTransformer. This guarantees that the first N indices are categorical, and that list of indices can be passed to CatBoost's cat_features parameter at the end.
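The approach above can be sketched as follows, assuming (as a simplification) that no later step drops any of the leading categorical columns:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OrdinalEncoder

data = pd.DataFrame({'pet':      ['cat', 'dog', 'dog', 'fish'],
                     'children': [4., 6, 3, 2],
                     'salary':   [90., 24, 27, 32]})
categorical_features = ['pet']
numerical_features = ['children', 'salary']

# Put the categorical sub-pipeline FIRST so its outputs occupy
# the leading columns of the transformed matrix.
preprocessing = ColumnTransformer(transformers=[
    ('categoricals', Pipeline(steps=[
        ('impute', SimpleImputer(strategy='most_frequent')),
        ('encode', OrdinalEncoder()),
    ]), categorical_features),
    ('numericals', SimpleImputer(strategy='mean'), numerical_features),
])

Xt = preprocessing.fit_transform(data)
# the first len(categorical_features) columns are now categorical
cat_idx = list(range(len(categorical_features)))
```

cat_idx can then be passed as CatBoost's cat_features, as long as every intermediate step preserves those leading columns.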

Upvotes: 0

prasad

Reputation: 31

The issue is not with CatBoost but with how your ColumnTransformer works. The ColumnTransformer reconstructs the input dataframe post-transformation in the order of your transform operations, not in the original column order.
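A minimal illustration of that ordering: here the 'numericals' output comes first in the transformed matrix, even though 'pet' is the first column of the input frame.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import MinMaxScaler, OrdinalEncoder

df = pd.DataFrame({'pet':      ['cat', 'dog'],
                   'children': [4., 6],
                   'salary':   [90., 24],
                   'gender':   ['male', 'male']})

ct = ColumnTransformer(transformers=[
    ('numericals', MinMaxScaler(), ['children', 'salary']),
    ('categoricals', OrdinalEncoder(), ['pet', 'gender']),
])
out = ct.fit_transform(df)
# columns 0-1 are children/salary, columns 2-3 are pet/gender,
# i.e. transformer order, not the original df order
```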

Upvotes: 0

Anna Veronika Dorogush

Reputation: 1223

You can try passing cat_features to the CatBoostClassifier init function instead of to fit().

Upvotes: -1
