Tom Kealy

Reputation: 2669

Optimising a meta estimator

I'm trying to use the GridSearchCV functions of scikit-learn to find the best parameters of some base models, which I then feed into a stacking estimator.

My code is based on this post (which I'm using to illustrate): https://stats.stackexchange.com/questions/139042/ensemble-of-different-kinds-of-regressors-using-scikit-learn-or-any-other-pytho/274147

I'd like to perform a grid search over the parameters of my estimators (mostly the ridge penalty, the number of neighbours in KNN, and the RF depth and split criteria), but I can't get it working. I define the model below:

from sklearn.base import TransformerMixin
from sklearn.datasets import make_regression
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import LinearRegression, Ridge

class RidgeTransformer(Ridge, TransformerMixin):

    def transform(self, X, *_):
        return self.predict(X)


class RandomForestTransformer(RandomForestRegressor, TransformerMixin):

    def transform(self, X, *_):
        return self.predict(X)


class KNeighborsTransformer(KNeighborsRegressor, TransformerMixin):

    def transform(self, X, *_):
        return self.predict(X)


def build_model():
    ridge_transformer = Pipeline(steps=[
        ('scaler', StandardScaler()),
        ('poly_feats', PolynomialFeatures()),
        ('ridge', RidgeTransformer())
    ])

    pred_union = FeatureUnion(
        transformer_list=[
            ('ridge', ridge_transformer),
            ('rand_forest', RandomForestTransformer()),
            ('knn', KNeighborsTransformer())
        ],
        n_jobs=2
    )

    model = Pipeline(steps=[
         ('pred_union', pred_union),
         ('lin_regr', LinearRegression())
    ])

    return model

Now, I'd like to run CV on the parameters of the forest. I can list the available parameter names with:

model = build_model()
print(model.get_params().keys())

But when I run the code below, I still get an error:

pipe = Pipeline(steps=[('reg', model)])

parameters = {'pred_union__rand_forest__n_estimators':[20, 50, 100, 200]}

g_search = GridSearchCV(pipe, parameters)

X, y = make_regression(n_features=10, n_targets=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

g_search.fit(X_train, y_train)

Invalid parameter pred_union for estimator Pipeline(memory=None,
 steps=[('reg', Pipeline(memory=None,
 steps=[('pred_union', FeatureUnion(n_jobs=2,
   transformer_list=[('ridge', Pipeline(memory=None,
 steps=[('scaler', StandardScaler(copy=True, with_mean=True, with_std=True)), ('poly_feats', PolynomialFeatures(degree=2, include_bias=True, interaction_only=False)), ('ridge', RidgeTransformer(...=None)), ('lin_regr', LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False))]))]). Check the list of available parameters with `estimator.get_params().keys()`.

What am I doing wrong?

Upvotes: 0

Views: 357

Answers (1)

Vivek Kumar

Reputation: 36609

Your model is already a pipeline, so there is no need to wrap it in a second one: drop pipe = Pipeline(steps=[('reg', model)]) and pass model to the grid search directly.

But if you do want to keep the extra pipeline wrapper, then you need to update the parameter names by prefixing each one with 'reg__':

parameters = {'reg__pred_union__rand_forest__n_estimators':[20, 50, 100, 200]}
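To make the naming rule concrete, here is a minimal, self-contained sketch of the first option (no extra wrapper): a trimmed-down version of the question's stacking pipeline, with the forest addressed as pred_union__rand_forest__n_estimators, i.e. the FeatureUnion step name, then the transformer name, then the parameter, joined by double underscores. The tiny grid and n_targets=2 are just illustrative choices, not taken from your setup.

```python
from sklearn.base import TransformerMixin
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import FeatureUnion, Pipeline


class RidgeTransformer(Ridge, TransformerMixin):
    # expose predictions as "features" so the estimator can sit in a FeatureUnion
    def transform(self, X, *_):
        return self.predict(X)


class RandomForestTransformer(RandomForestRegressor, TransformerMixin):
    def transform(self, X, *_):
        return self.predict(X)


# base-model predictions are stacked side by side, then fed to a final regressor
pred_union = FeatureUnion([
    ('ridge', RidgeTransformer()),
    ('rand_forest', RandomForestTransformer()),
])
model = Pipeline([
    ('pred_union', pred_union),
    ('lin_regr', LinearRegression()),
])

# key mirrors the nesting: <pipeline step>__<union member>__<parameter>
parameters = {'pred_union__rand_forest__n_estimators': [5, 10]}
g_search = GridSearchCV(model, parameters)

# n_targets=2 keeps predict() two-dimensional, which FeatureUnion's hstack expects
X, y = make_regression(n_features=10, n_targets=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
g_search.fit(X_train, y_train)
print(g_search.best_params_)
```

If you keep the 'reg' wrapper instead, the only change is the key: 'reg__pred_union__rand_forest__n_estimators'.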

Upvotes: 1
