I am trying to deploy a model generated by Azure ML AutoML from an ML notebook (script shortened for brevity):
automl_settings = {
    "experiment_timeout_minutes": 20,
    "primary_metric": 'AUC_weighted',
    "max_concurrent_iterations": 8,
    "max_cores_per_iteration": -1,
    "enable_dnn": False,
    "enable_early_stopping": True,
    "validation_size": 0.3,
    "verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task='classification',
                             debug_log='automl_errors.log',
                             compute_target=compute_target,
                             blacklist_models=['LogisticRegression', 'MultinomialNaiveBayes',
                                               'BernoulliNaiveBayes', 'LinearSVM', 'DecisionTree',
                                               'RandomForest', 'ExtremeRandomTrees', 'LightGBM',
                                               'KNN', 'SVM', 'StackEnsemble', 'VotingEnsemble'],
                             training_data=train_dataset,
                             label_column_name=target_column_name,
                             **automl_settings)
automl_run = experiment.submit(automl_config, show_output=True)
best_run, fitted_model = automl_run.get_output()
best_run_metrics = best_run.get_metrics()
children = list(automl_run.get_children(recursive=True))
summary_df = pd.DataFrame(index=['run_id', 'run_algorithm',
                                 'primary_metric', 'Score'])
goal_minimize = False
for run in children:
    if 'run_algorithm' in run.properties and 'score' in run.properties:
        summary_df[run.id] = [run.id, run.properties['run_algorithm'],
                              run.properties['primary_metric'],
                              float(run.properties['score'])]
        if 'goal' in run.properties:
            goal_minimize = run.properties['goal'].split('_')[-1] == 'min'

summary_df = summary_df.T.sort_values(
    'Score',
    ascending=goal_minimize).drop_duplicates(['run_algorithm'])
summary_df = summary_df.set_index('run_algorithm')
best_dnn_run_id = summary_df['run_id'].iloc[0]
best_dnn_run = Run(experiment, best_dnn_run_id)
model_dir = 'Model' # Local folder where the model will be stored temporarily
if not os.path.isdir(model_dir):
    os.mkdir(model_dir)
best_run.download_file('outputs/model.pkl', model_dir + '/model.pkl')
# Register the model
model_name = best_run.properties['model_name']
model_path = os.path.join('./outputs', 'model.pkl')
description = 'My Model'
model = best_run.register_model(model_name=model_name, model_path=model_path,
                                model_framework='AutoML', description=description,
                                tags={'env': 'sandbox'})
# Deploy the Model
service_name = 'my-ml-service'
service = Model.deploy(ws, service_name, [model], overwrite=True)
service.wait_for_deployment(show_output=True)
Everything appears to run fine until I try to deploy the model:
---------------------------------------------------------------------------
UserErrorException                        Traceback (most recent call last)
<ipython-input-48-5c72d1613c28> in <module>
      3 service_name = 'my-service'
      4
----> 5 service = Model.deploy(ws, service_name, [model], overwrite=True)
      6
      7

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/azureml/core/model.py in deploy(workspace, name, models, inference_config, deployment_config, deployment_target, overwrite)
   1577                                  logger=module_logger)
   1578
-> 1579         return Model._deploy_no_code(workspace, name, models, deployment_config, deployment_target, overwrite)
   1580
   1581     # Environment-based webservice.

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/azureml/core/model.py in _deploy_no_code(workspace, name, models, deployment_config, deployment_target, overwrite)
   1795         :rtype: azureml.core.Webservice
   1796         """
-> 1797         environment_image_request = build_and_validate_no_code_environment_image_request(models)
   1798
   1799         return Model._deploy_with_environment_image_request(workspace, name, environment_image_request,

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/azureml/_model_management/_util.py in build_and_validate_no_code_environment_image_request(models)
   1180         raise UserErrorException('You must provide an InferenceConfig when deploying a model with model_framework '
   1181                                  'set to {}. Default environments are only provided for these frameworks: {}.'
-> 1182                                  .format(model.model_framework, Model._SUPPORTED_FRAMEWORKS_FOR_NO_CODE_DEPLOY))
   1183
   1184     # Only specify the model IDs; MMS will provide the environment, driver program, etc.

UserErrorException: UserErrorException:
Message: You must provide an InferenceConfig when deploying a model with model_framework set to AutoML. Default environments are only provided for these frameworks: ['Onnx', 'ScikitLearn', 'TensorFlow'].
InnerException None
ErrorResponse
{
    "error": {
        "code": "UserError",
        "message": "You must provide an InferenceConfig when deploying a model with model_framework set to AutoML. Default environments are only provided for these frameworks: ['Onnx', 'ScikitLearn', 'TensorFlow']."
    }
}
When deploying an AutoML-generated model from Azure Machine Learning Studio, I am not prompted for an entry script or a dependencies file (or an InferenceConfig). Is there a way to configure this with the Python SDK so that I can "no-code deploy" an AutoML-generated model? Or is there something wrong with my code? Hope you can help.
Upvotes: 4
Views: 1318
I don't think you can rely on "no-code" deployment in your scenario, because AutoML may decide that the best model comes from a framework that "no-code" deployment does not yet support.
If it helps, you can create the InferenceConfig from your Run:
from azureml.core.model import InferenceConfig

environment = best_run.get_environment()
inference_config = InferenceConfig(entry_script='score.py', environment=environment)
Upvotes: 4