Joachim Rives

Reputation: 553

Using Scikit-Learn's RandomizedSearchCV module, how do you guarantee a certain set of hyper-parameter settings will all be tested?


My goal is to guarantee that, when running a randomized search for optimal estimator hyper-parameters, every available activation function for sklearn's MLPClassifier is tested. I would also like an answer that applies to Python machine learning models/estimators in general. I know I could find the best activation function by running RandomizedSearchCV or GridSearchCV on a separate MLPClassifier instance for each of the three candidates. The problem is, what if I want to test all available activation functions and all weight "solvers", along with other parameters such as the number of neurons and layers? Is there any way to do this with a Python library? A sketch of the kind of search I have in mind is below.
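Here is a minimal sketch of the setup in question. The parameter names are real MLPClassifier arguments, but the specific ranges, dataset, and n_iter value are only illustrative assumptions:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import RandomizedSearchCV
    from sklearn.neural_network import MLPClassifier

    # Toy data just to make the example runnable.
    X, y = make_classification(n_samples=200, random_state=0)

    param_distributions = {
        "activation": ["identity", "logistic", "tanh", "relu"],
        "solver": ["lbfgs", "sgd", "adam"],
        "hidden_layer_sizes": [(10,), (50,), (50, 50)],
    }

    # With n_iter=10 there is no guarantee that every activation/solver
    # combination (4 * 3 * 3 = 36 in total) is ever drawn.
    search = RandomizedSearchCV(
        MLPClassifier(max_iter=500),
        param_distributions,
        n_iter=10,
        random_state=0,
    )
    search.fit(X, y)
    print(search.best_params_)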

Upvotes: 0

Views: 127

Answers (1)

Joachim Rives

Reputation: 553

I am only posting this as an answer because the comment below the question is the best answer for me.

Comment by desertnaut on Aug. 08, 2020:

You cannot; RandomizedSearchCV provides absolutely no such guarantees. You should revert to GridSearchCV if you want to be sure that certain combinations will be tested.

The best solution for now is to use RandomizedSearchCV and GridSearchCV in combination, for example as sketched below.
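A minimal sketch of that combination, assuming an illustrative two-step workflow and parameter ranges (not prescribed by the comment): RandomizedSearchCV narrows the continuous hyper-parameters first, then GridSearchCV exhaustively tests every activation and solver, which is what guarantees full coverage of the categorical settings:

    from scipy.stats import loguniform
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=200, random_state=0)

    # Step 1: random search over the continuous hyper-parameters only.
    random_search = RandomizedSearchCV(
        MLPClassifier(max_iter=500),
        {
            "alpha": loguniform(1e-5, 1e-1),
            "learning_rate_init": loguniform(1e-4, 1e-1),
        },
        n_iter=10,
        random_state=0,
    )
    random_search.fit(X, y)

    # Step 2: grid search guarantees every activation and solver is tried,
    # reusing the continuous values found above.
    grid_search = GridSearchCV(
        MLPClassifier(max_iter=500, **random_search.best_params_),
        {
            "activation": ["identity", "logistic", "tanh", "relu"],
            "solver": ["lbfgs", "sgd", "adam"],
        },
    )
    grid_search.fit(X, y)
    print(grid_search.best_params_)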

Upvotes: 0
