Startiflette

Reputation: 111

Can we save the results of Hyperopt Trials with SparkTrials?

I am currently trying to optimize the hyperparameters of a gradient boosting model with the hyperopt library. When I was working on my own computer, I used the Trials class and was able to save and reload my results with the pickle library. This gave me a record of every set of parameters I tested. My code looked like this:

import os
import pickle as pkl

import xgboost as xgb
from hyperopt import Trials, SparkTrials, STATUS_OK, tpe, fmin
from sklearn.model_selection import cross_val_score
from LearningUtils.LearningUtils import build_train_test, get_train_test, mean_error, rmse, mae
from LearningUtils.constants import MAX_EVALS, CV, XGBOOST_OPTIM_SPACE, PARALELISM

if os.path.isfile(PATH_TO_TRIALS):  # reload the past results
    with open(PATH_TO_TRIALS, 'rb') as trials_file:
        trials = pkl.load(trials_file)
else:  # create a fresh trials object
    trials = Trials()
    
# classic hyperparameters optimization  
def objective(space):
    regressor = xgb.XGBRegressor(n_estimators = space['n_estimators'],
                            max_depth = int(space['max_depth']),
                            learning_rate = space['learning_rate'],
                            gamma = space['gamma'],
                            min_child_weight = space['min_child_weight'],
                            subsample = space['subsample'],
                            colsample_bytree = space['colsample_bytree'],
                            verbosity=0
                            )
    regressor.fit(X_train, Y_train)
    # Applying k-Fold Cross Validation
    accuracies = cross_val_score(estimator=regressor, X=X_train, y=Y_train, cv=5)
    CrossValMean = accuracies.mean()
    return {'loss':1-CrossValMean, 'status': STATUS_OK}

best = fmin(fn=objective,
            space=XGBOOST_OPTIM_SPACE,
            algo=tpe.suggest,
            max_evals=MAX_EVALS,
            trials=trials,
           return_argmin=False)

# Save the trials
with open(PATH_TO_TRIALS, "wb") as trials_file:
    pkl.dump(trials, trials_file)

Now, I would like to run this code on a remote server with more CPUs in order to parallelize the search and save time.

I saw that I can do this simply by using hyperopt's SparkTrials class instead of Trials. But SparkTrials objects cannot be saved with pickle. Do you have any idea how I could save and reload the trial results stored in a SparkTrials object?

Upvotes: 4

Views: 2297

Answers (1)

Sebastian Castano

Reputation: 1621

This might be a bit late, but after messing around a bit, I found a somewhat hacky solution:

import pickle

from hyperopt import SparkTrials

spark_trials = SparkTrials()
pickling_trials = dict()

# Copy every attribute except the live Spark handles, which cannot be pickled
for k, v in spark_trials.__dict__.items():
    if k not in ['_spark_context', '_spark']:
        pickling_trials[k] = v

with open('pickling_trials.hyperopt', 'wb') as f:
    pickle.dump(pickling_trials, f)

The _spark_context and _spark attributes of the SparkTrials instance are what prevent the object from being serialized. It turns out you don't need them to re-use the object: if you re-run the optimization, a new Spark context is created anyway, so you can reuse the trials like this:

new_sparktrials = SparkTrials()

# Restore the saved attributes onto the fresh object
for att, v in pickling_trials.items():
    setattr(new_sparktrials, att, v)

best = fmin(loss_func,
            space=search_space,
            algo=tpe.suggest,
            max_evals=1000,
            trials=new_sparktrials)
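If you do this often, the two steps above can be wrapped into a pair of small helpers. The function names and the excluded-attribute list below are my own, not part of hyperopt; the helpers only rely on copying `__dict__`, so they work on any Trials-like object:

```python
import pickle

# Attributes that hold live Spark handles and cannot be pickled
# (assumption based on the workaround above)
NON_PICKLABLE_ATTRS = ('_spark_context', '_spark')

def save_trials(trials, path):
    """Pickle every attribute of a (Spark)Trials object except the
    live Spark handles."""
    state = {k: v for k, v in trials.__dict__.items()
             if k not in NON_PICKLABLE_ATTRS}
    with open(path, 'wb') as f:
        pickle.dump(state, f)

def load_trials(path, trials):
    """Restore the saved attributes onto a freshly created
    (Spark)Trials object; its new Spark context is left untouched."""
    with open(path, 'rb') as f:
        state = pickle.load(f)
    for k, v in state.items():
        setattr(trials, k, v)
    return trials
```

You would then call `save_trials(spark_trials, 'pickling_trials.hyperopt')` after `fmin` returns, and `load_trials('pickling_trials.hyperopt', SparkTrials())` before resuming.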

voilà :)

Upvotes: 4

Related Questions