Laasya

Reputation: 61

How to deploy a custom model in AWS SageMaker?

I have a custom machine learning predictive model, along with a user-defined Estimator class that uses Optuna for hyperparameter tuning. I need to deploy this model to SageMaker so that I can invoke it from a Lambda function.

I'm having trouble creating a container for the model and the Estimator.

I am aware that SageMaker has a scikit-learn container that can be used with Optuna, but how would I leverage it to include the functions from my own Estimator class? Also, the model is one of the parameters passed to this Estimator class, so how do I define it as a separate training job in order to turn it into an endpoint?

This is how the Estimator class and the model are invoked:

sirf_estimator = Estimator(
    SIRF, ncov_df, population_dict[countryname],
    name=countryname, places=[(countryname, None)],
    start_date=critical_country_start,
)
sirf_dict = sirf_estimator.run()

where:

  1. Model name: SIRF
  2. Cleaned dataset: ncov_df

Would be really helpful if anyone could look into this, thanks a ton!

Upvotes: 6

Views: 2414

Answers (1)

Yoav Zimmerman

Reputation: 608

SageMaker inference endpoints currently rely on an interface based on Docker images. At the base level, you set up a Docker image that runs a web server and responds to requests on the endpoints and ports that AWS requires. This guide shows how to do it: https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-inference-code.html.
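To give a concrete picture: SageMaker sends health checks to GET /ping and inference requests to POST /invocations on port 8080, and extracts your model artifact to /opt/ml/model inside the container. Below is a minimal sketch of the kind of server the image would run; the choice of Flask, the model.pkl file name, and the JSON payload format are my assumptions, not part of your setup.

# serve.py -- minimal sketch of the web server the container runs.
# SageMaker sends GET /ping health checks and POST /invocations requests
# on port 8080; model artifacts are extracted to /opt/ml/model.
# The model file name and JSON payload format here are assumptions.
import json
import pickle

from flask import Flask, Response, request

app = Flask(__name__)

with open("/opt/ml/model/model.pkl", "rb") as f:  # assumed artifact name
    model = pickle.load(f)

@app.route("/ping", methods=["GET"])
def ping():
    # Return 200 so SageMaker considers the container healthy.
    return Response(status=200)

@app.route("/invocations", methods=["POST"])
def invocations():
    payload = json.loads(request.data)            # assumed JSON input
    prediction = model.predict(payload["data"])   # your model's predict()
    return Response(json.dumps({"prediction": list(prediction)}),
                    mimetype="application/json")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)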

This is an annoying amount of work. If you're using a well-known framework, AWS maintains a container library with boilerplate code that you may be able to reuse and customize: https://github.com/aws/sagemaker-containers.
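Since you mentioned the scikit-learn container: a simpler route may be the SageMaker Python SDK, which lets you point the prebuilt scikit-learn serving container at your own inference script instead of building an image by hand. A rough sketch, assuming you have pickled the trained SIRF estimator and uploaded it to S3 as model.tar.gz; the bucket path, IAM role, and framework version below are placeholders.

# deploy_sklearn.py -- sketch of reusing the prebuilt scikit-learn serving
# container via the SageMaker Python SDK. The S3 path, IAM role, framework
# version, and the contents of inference.py are placeholders, not values
# taken from the question.
from sagemaker.sklearn.model import SKLearnModel

model = SKLearnModel(
    model_data="s3://my-bucket/sirf/model.tar.gz",  # tarball containing your pickled model
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    entry_point="inference.py",   # loads and calls your Estimator / SIRF code
    source_dir="src",             # ship your Estimator class and Optuna deps here
    framework_version="0.23-1",
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)

The inference.py script is where you would implement the model_fn / predict_fn hooks the serving container looks for, and where you would import your own Estimator class so its functions are available at inference time.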

Or don't use SageMaker inference endpoints at all :) If your model fits within the size and memory limits of AWS Lambda, that is an easier option!
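A rough sketch of that option, assuming the pickled model is packaged with the function (or in a layer) and requests arrive as JSON through API Gateway; both of those are assumptions on my part.

# lambda_handler.py -- sketch of serving the model directly from Lambda,
# skipping SageMaker entirely. Assumes model.pkl is bundled with the
# function and the event carries an API Gateway-style JSON body.
import json
import pickle

with open("model.pkl", "rb") as f:   # loaded once per container, reused across invocations
    model = pickle.load(f)

def handler(event, context):
    payload = json.loads(event["body"])          # assumed API Gateway event shape
    prediction = model.predict(payload["data"])  # your model's predict()
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": list(prediction)}),
    }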

Upvotes: 3
