Reputation: 315
Is it possible to use local compute for the TensorFlow estimator? Provisioning a virtual machine for a training run takes an enormous amount of time, and I would like to be able to try a few runs locally until my configuration is stable.
It is possible to do this with ScriptRunConfig by creating an empty RunConfiguration. The documentation claims it is possible to use the local computer as a compute target, but the instructions for actually doing so are missing:
Local computer
Create and attach: There's no need to create or attach a compute target to use your local computer as the training environment.
Configure: When you use your local computer as a compute target, the training code is run in your development environment. If that environment already has the Python packages you need, use the user-managed environment.
Upvotes: 3
Views: 820
Reputation: 315
Use compute_target='local'. Adapted from the Microsoft docs:
from azureml.train.dnn import PyTorch

script_params = {
    '--num_epochs': 30,
    '--output_dir': './outputs'
}

estimator = PyTorch(source_directory=project_folder,
                    script_params=script_params,
                    # compute_target=compute_target,
                    compute_target='local',
                    entry_script='pytorch_train.py',
                    use_gpu=True,
                    pip_packages=['pillow==5.4.1'])
Upvotes: 0
Reputation: 8221
I'd use the Microsoft docs directly, instead of the GitHub raw pages - I've noticed that the latter are sometimes incomplete and/or outdated.
As you suspected, the docs confirm that you should create an empty RunConfiguration, something like the following code (taken from the page linked above):
from azureml.core.runconfig import RunConfiguration
# Edit a run configuration property on the fly.
run_local = RunConfiguration()
run_local.environment.python.user_managed_dependencies = True
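To actually submit a local run with that configuration, you could pair it with a ScriptRunConfig, roughly like this sketch. The workspace config file, the experiment name, and the train.py entry script are assumptions for illustration, not part of the quoted docs:

```python
from azureml.core import Workspace, Experiment, ScriptRunConfig
from azureml.core.runconfig import RunConfiguration

# Assumes a config.json for your workspace is present in the working directory
ws = Workspace.from_config()

# Empty run configuration: run in the local Python environment as-is
run_local = RunConfiguration()
run_local.environment.python.user_managed_dependencies = True

# 'train.py' and the experiment name are placeholders
src = ScriptRunConfig(source_directory='.',
                      script='train.py',
                      run_config=run_local)
run = Experiment(workspace=ws, name='local-test').submit(src)
run.wait_for_completion(show_output=True)
```

Because user_managed_dependencies is True, the script runs against whatever packages your local environment already has, which is exactly what makes the quick iteration loop possible.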
Upvotes: 1