Reputation: 1886
In my AML pipeline, I've got a model built and deployed to an AciWebservice. I now need to include some additional data that would be used by score.py. This data is in JSON format (~1 MB) and is specific to the model that's built. To accomplish this, I was thinking of putting the file in blob storage and updating some "placeholder" vars in score.py during deployment, but that seems hacky.
Here are some options I was considering, but I wasn't sure about their practicality:
Option 1: Is it possible to include this file during the model deployment itself, so that it's part of the Docker image?
Option 2: Would it be possible to include this JSON data as part of the model artifacts?
Option 3: How about registering it as a dataset and pulling that in from the score file?
What is the recommended way to deploy dependent files in a model deployment scenario?
Upvotes: 3
Views: 2345
Reputation: 2497
To extend the answer by @Roope Astala - MSFT, this is how you can implement it by using the second approach
Put the file in a local folder, and specify that folder as source_directory in InferenceConfig. In this approach the file is re-uploaded every time you deploy a new endpoint.
Let's say this is your file structure.
.
└── deployment
    ├── entry.py
    ├── env.yml
    └── files
        └── data.txt
And you want to read files/data.txt in the entry.py script. This is how you would read it in entry.py:
file_path = 'deployment/files/data.txt'
with open(file_path, 'r') as f:
    ...
And this is how you would set up your deployment configuration.
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice

deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
inference_config = InferenceConfig(
    runtime='python',
    source_directory='deployment',
    entry_script='entry.py',
    conda_file='env.yml'
)
Upvotes: 0
Reputation: 756
There are a few ways to accomplish this:
1. Put the additional file in the same folder as your model file, and register the whole folder as the model. In this approach the file is stored alongside the model.
2. Put the file in a local folder, and specify that folder as source_directory in InferenceConfig. In this approach the file is re-uploaded every time you deploy a new endpoint.
3. Use a custom base image in InferenceConfig to bake the file into the Docker image itself.
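With the first approach, the deployed container exposes the registered model folder through the AZUREML_MODEL_DIR environment variable, so the scoring script can locate the side file relative to it. A minimal sketch, assuming a hypothetical data.json was placed in the folder registered as the model:

```python
import json
import os

def load_side_data(filename='data.json'):
    # AZUREML_MODEL_DIR is set by Azure ML inside the deployed container and
    # points at the folder the registered model was downloaded to.
    model_dir = os.environ['AZUREML_MODEL_DIR']
    with open(os.path.join(model_dir, filename)) as f:
        return json.load(f)
```

On the registration side, passing the folder (not just the single model file) as model_path to Model.register is what makes the extra file travel with the model.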
Upvotes: 2