Reputation: 1628
I'm trying to avoid migrating an existing model training process to SageMaker and avoid creating a custom Docker container to host our trained model.
My hope was to inject our existing, trained model into the pre-built scikit-learn container that AWS provides via the sagemaker-python-sdk. All of the examples I have found require training the model first, which creates the model/model configuration in SageMaker; the model is then deployed with the deploy method.
Is it possible to provide a trained model to the deploy method and have it hosted in the pre-built scikit-learn container that AWS provides?
For reference, the examples I've seen follow this order of operations (a sketch follows the list):
1. Creating a sagemaker.sklearn.estimator.SKLearn instance and providing a training script
2. Calling the fit method on it
3. Calling the deploy method on the SKLearn instance, which automagically takes the model created in step 2/3 and deploys it in the pre-built scikit-learn container as an HTTPS endpoint
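A minimal sketch of that flow, for concreteness; the script name, role ARN, S3 path, and instance types here are placeholders:

```python
from sagemaker.sklearn.estimator import SKLearn

# Hypothetical training setup: train.py and the role/bucket are placeholders.
estimator = SKLearn(
    entry_point="train.py",  # training script the SDK runs in the container
    role="arn:aws:iam::111122223333:role/MySageMakerRole",
    instance_count=1,
    instance_type="ml.m5.large",
    framework_version="0.23-1",  # scikit-learn version of the pre-built container
)

# Step 2: train, producing a model artifact in S3.
estimator.fit({"train": "s3://my-bucket/training-data"})

# Step 3: deploy the model from step 2 as an HTTPS endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```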
Upvotes: 8
Views: 2249
Reputation: 124
Struggled with the same use case for a couple of days.
We used the sagemaker.model.Model class and sagemaker.pipeline.PipelineModel.
We outlined our solution here: How to handle custom transformation/inference and requirements in sagemaker endpoints
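For what it's worth, a rough sketch of that shape, assuming two containers chained behind one endpoint; the image URIs, artifact paths, and role below are placeholders (the linked post has the real details):

```python
from sagemaker.model import Model
from sagemaker.pipeline import PipelineModel

role = "arn:aws:iam::111122223333:role/MySageMakerRole"  # hypothetical execution role

# Hypothetical containers: one handles custom transformations,
# the other runs the actual model inference.
preprocess = Model(
    image_uri="111122223333.dkr.ecr.us-east-1.amazonaws.com/preprocess:latest",
    model_data="s3://my-bucket/preprocess/model.tar.gz",
    role=role,
)
inference = Model(
    image_uri="111122223333.dkr.ecr.us-east-1.amazonaws.com/sklearn-inference:latest",
    model_data="s3://my-bucket/model/model.tar.gz",
    role=role,
)

# Requests to the endpoint pass through the models in order.
pipeline = PipelineModel(models=[preprocess, inference], role=role)
pipeline.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```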
Upvotes: 0
Reputation: 2739
Yes, you can import existing models to SageMaker.
For scikit-learn, you would use the SKLearnModel() object to load the model from S3 and create it in SageMaker. Then you can deploy it as usual.
https://sagemaker.readthedocs.io/en/latest/sagemaker.sklearn.html
Here's a full example based on MXNet that will point you in the right direction: https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/mxnet_onnx_superresolution/mxnet_onnx.ipynb
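Something like the following should work (a minimal sketch, assuming your trained model is serialized and packaged as model.tar.gz in S3; the S3 path, role ARN, and script name are placeholders, and framework_version should match the scikit-learn version you trained with):

```python
from sagemaker.sklearn.model import SKLearnModel

# Hypothetical pointers to an already-trained model artifact in S3.
model = SKLearnModel(
    model_data="s3://my-bucket/model/model.tar.gz",  # packaged, pre-trained model
    role="arn:aws:iam::111122223333:role/MySageMakerRole",  # execution role
    entry_point="inference.py",  # script that tells the container how to load the model
    framework_version="0.23-1",  # scikit-learn version of the pre-built container
)

# Deploy into the pre-built scikit-learn container as an HTTPS endpoint,
# with no training step required.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```

The entry point script needs to implement at least a model_fn that deserializes the model (e.g. with joblib), so the pre-built container knows how to load your artifact.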
Upvotes: 8