Reputation: 65
Why are the errors not changing significantly in my cross-validated ridge regression model, even though I have tried a long list of alphas from 0.01 to 25?
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

# Candidate regularization strengths, from 0.01 up to 25
params = {'alpha': [25, 10, 4, 2, 1.0, 0.8, 0.5, 0.3, 0.2, 0.1, 0.05, 0.02, 0.01]}
rdg_reg = Ridge()
clf = GridSearchCV(rdg_reg, params, cv=2, verbose=1,
                   scoring='neg_mean_squared_error')
clf.fit(x_dummied_poly, y)
clf.best_params_
# {'alpha': 4}
pd.DataFrame(clf.cv_results_)
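For reference, this is roughly how I compare the errors per alpha (using the standard cv_results_ columns and negating the scores to read them as MSE); the mean test MSE barely moves across the whole grid:

# Scores are negated MSE, so flip the sign to read them as error
results = pd.DataFrame(clf.cv_results_)
results['mean_mse'] = -results['mean_test_score']
print(results[['param_alpha', 'mean_mse']])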
Upvotes: 0
Views: 5808
Reputation:
You either have to supply the data to us, so we can perform our own feature selection and dimensionality reduction (which I doubt anyone will do for you, since that is a tedious and time-consuming process; it is essentially the kind of machine learning work people get paid to do),
or
Just rest with the assumption that there is 'no free lunch' in the field of machine learning. What that quote means is that there is no single 'best' model that gives you what you are looking for.
That idea extends to hyperparameter tuning in a different sense: there is no hard-and-fast rule that 'alpha' is the parameter that matters most, or that changing its value must produce a significant change in the mean squared error.
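If you want to convince yourself of that empirically, one thing worth trying (a sketch, reusing your x_dummied_poly and y from the question) is to standardize the features and scan alpha on a log scale. Ridge only bites when the penalty is comparable in size to the data-fit term, and a linear grid of 0.01 to 25 can easily sit entirely inside a flat region of the error curve:

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Standardize first: ridge penalizes all coefficients equally,
# so features on very different scales can make alpha appear to do nothing.
pipe = Pipeline([('scale', StandardScaler()),
                 ('ridge', Ridge())])

# Log-spaced grid over several orders of magnitude, much wider than
# 0.01-25, to find where the error curve actually starts to bend.
params = {'ridge__alpha': np.logspace(-3, 4, 15)}
search = GridSearchCV(pipe, params, cv=5,
                      scoring='neg_mean_squared_error')
search.fit(x_dummied_poly, y)
print(search.best_params_)

If the mean test MSE is still essentially flat across that entire range, then regularization genuinely does not matter much for your data, and tuning alpha further will not buy you anything.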
or
Try asking this question on Cross Validated (the statistics Stack Exchange site).
Upvotes: 2