Reputation: 53
I am currently sifting through a large search space with Keras Tuner on a free Google Colab instance. Because of usage limits, my search run will be interrupted before completion. I would like to save the progress of the search periodically in anticipation of those interruptions, and simply resume from the last checkpoint when Colab resources become available to me again. I found documentation on how to save specific models from a run, but I want to save the entire state of the search, including which hyperparameters were already tried and the results of those experiments.
Can I just call Tuner.get_state(), save the result, and then resume from where I left off with Tuner.set_state()? Or is there another way?
Upvotes: 2
Views: 3371
Reputation: 4289
You don't need to call tuner.get_state() and tuner.set_state(). When instantiating a Tuner, say a RandomSearch, set the directory and project_name arguments, as in the example:
# While creating the tuner for the first time
from keras_tuner import RandomSearch

tuner = RandomSearch(
    build_model,
    objective="val_accuracy",
    max_trials=3,
    executions_per_trial=2,
    directory="my_dir",         # <----- Checkpoints and search state go here
    project_name="helloworld",
)
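Here build_model is the usual model-building function that takes a HyperParameters object and returns a compiled model; a minimal sketch (the architecture and hyperparameters below are just placeholders, not part of the original example) could look like:
# Hypothetical model-building function; substitute your own
from tensorflow import keras

def build_model(hp):
    model = keras.Sequential([
        keras.layers.Flatten(),
        keras.layers.Dense(
            hp.Int("units", min_value=32, max_value=512, step=32),
            activation="relu",
        ),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(
            hp.Choice("learning_rate", [1e-2, 1e-3, 1e-4])
        ),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model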
The checkpoints of every trial, together with the results and the tuner's own search state, are saved under the directory you passed (my_dir here). You can save this directory as a ZIP archive and download it using files.download().
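A minimal sketch of that step in Colab, assuming the archive is named my_dir.zip:
# Zip the tuner directory and download it before the session ends
import shutil
from google.colab import files

shutil.make_archive("my_dir", "zip", ".", "my_dir")   # creates my_dir.zip containing my_dir/
files.download("my_dir.zip")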
When you get a new Colab instance, upload the archive and unzip it to restore the my_dir directory.
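For example, assuming the archive from the previous session is still named my_dir.zip:
# Upload the archive in the new session and unpack it
import shutil
from google.colab import files

files.upload()                              # pick my_dir.zip in the file chooser
shutil.unpack_archive("my_dir.zip", ".")    # restores the my_dir/ directory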
Then instantiate the Tuner again, reusing the same directory and project_name:
# While loading the tuner in the new Colab session
from keras_tuner import RandomSearch

tuner = RandomSearch(
    build_model,
    objective="val_accuracy",
    max_trials=3,
    executions_per_trial=2,
    directory="my_dir",         # <----- Use the same directory as you did earlier
    overwrite=False,            # <----- Do not overwrite; reload the saved trials
    project_name="helloworld",
)
Now start the search and you'll notice that the best hyperparameters found so far haven't changed; the completed trials are reloaded and the search continues from where it stopped. Also, tuner.results_summary() shows the results of all trials, including those from the earlier session.
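A sketch of resuming in the new session (x_train, y_train, x_val, y_val stand in for your own data):
# Completed trials count towards max_trials, so only the remaining trials run
tuner.search(x_train, y_train,
             epochs=5,
             validation_data=(x_val, y_val))

# Summarise every trial so far, including those from the previous session
tuner.results_summary()

# Best hyperparameters across all sessions
best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]
print(best_hps.values)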
See the documentation for the Tuner class here.
Upvotes: 6