Reputation: 83
I am currently getting into Azure Machine Learning and am working through the learning path for data scientists. In that learning path, the Designer is introduced, where pipelines are published to be consumed as a real-time inference pipeline.
Since I don't want to use the Designer all the time, I want to do the same in Python. All the tutorials on Microsoft Learn only show how to deploy a single model as a service (e.g. https://learn.microsoft.com/en-us/learn/modules/register-and-deploy-model-with-amls/). In those tutorials, pipelines are only created to train models, not to make predictions on incoming data (https://learn.microsoft.com/en-us/learn/modules/create-pipelines-in-aml/). An entry script is used to load incoming data into the pretrained model. It is not clear to me how to implement pipeline steps inside this entry script. I searched online, but couldn't find a satisfying explanation of how to do this. Is there a tutorial of any sort out there for this?
I am thinking about those pipeline steps because I would like to preprocess incoming data with the same scaler I used to train my model. In my eyes, loading the training data every time new data comes in, just to fit a scaler to it, seems like far too much overhead for (near-)real-time models.
I am guessing there is an easy way to do all this, but with the resources I found online I couldn't come up with a suitable solution.
Best regards and thank you in advance!
Upvotes: 4
Views: 613
Reputation: 2754
The “score.py” exposed in trained_model_outputs is meant for customized deployment: it only contains the model init and scoring logic, and you can add your own pre-processing and post-processing code on top of that. Scoring logic that already includes the pre-processing is available through the web service deployed from the Designer, which can run on both AKS and ACI. You can follow this doc: Tutorial: Deploy ML models with the designer - Azure Machine Learning | Microsoft Docs.
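For illustration, here is a minimal sketch of what such a customized entry script could look like, assuming the scaler fitted on the training data was saved as scaler.pkl next to model.pkl when the model was registered (the registered model name and file names are assumptions):

```python
# score.py -- a minimal sketch of a customized entry script; the registered
# model name and the pickle file names are assumptions and must match what
# was actually registered (model.pkl plus the scaler fitted on the training data).
import os
import json
import joblib
import numpy as np
from azureml.core.model import Model

def init():
    global model, scaler
    # Resolve the folder of the registered model artifacts inside the service.
    model_dir = Model.get_model_path("my-designer-model")
    model = joblib.load(os.path.join(model_dir, "model.pkl"))
    # The scaler that was fitted on the training data and saved with the model,
    # so nothing has to be refitted when new data arrives.
    scaler = joblib.load(os.path.join(model_dir, "scaler.pkl"))

def run(raw_data):
    try:
        data = np.array(json.loads(raw_data)["data"])
        # Pre-process with the already-fitted scaler, then score.
        scaled = scaler.transform(data)
        predictions = model.predict(scaled)
        return predictions.tolist()
    except Exception as e:
        return {"error": str(e)}
```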
Upvotes: 1
Reputation: 2754
Please follow this document. Basically, you can register a model trained in the Designer and then bring it out with the SDK/CLI to deploy it.
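As a rough sketch of that registration step (the output folder and model name below are illustrative assumptions):

```python
# A minimal sketch of registering the trained model (plus the fitted scaler)
# with the SDK; "outputs" and the model name are illustrative assumptions.
from azureml.core import Workspace, Model

ws = Workspace.from_config()

model = Model.register(
    workspace=ws,
    model_path="outputs",            # folder containing model.pkl and scaler.pkl
    model_name="my-designer-model",  # hypothetical name, reused at deploy time
)
```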
Sharing a reference notebook from Nicholas.
How to deploy using environments can be found in model-register-and-deploy.ipynb. The InferenceConfig class accepts source_directory and entry_script parameters, where source_directory is the path to the folder that contains all files (score.py and any additional files) needed to create the image. The multi-model-register-and-deploy.ipynb notebook has code snippets showing how to create an InferenceConfig with source_directory and entry_script.
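Putting it together, a hedged sketch of creating the InferenceConfig with source_directory and entry_script and deploying to ACI might look like this (the folder, script, environment and service names are assumptions):

```python
# Sketch of deploying the registered model with a custom entry script to ACI;
# "source_dir", "score.py", the environment and the service name are assumptions.
from azureml.core import Workspace, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()
model = Model(ws, name="my-designer-model")

env = Environment("inference-env")
env.python.conda_dependencies = CondaDependencies.create(
    pip_packages=["scikit-learn", "joblib", "azureml-defaults"]
)

inference_config = InferenceConfig(
    source_directory="source_dir",  # folder with score.py and any helper files
    entry_script="score.py",
    environment=env,
)

deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(ws, "scaler-aware-service", [model],
                       inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```

After deployment, service.scoring_uri gives the endpoint to which new data can be posted; the entry script then applies the already-fitted scaler before scoring.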
Upvotes: 0