Temple Jersey

Reputation: 63

Sagemaker Batch Transform entry point

Before running an AWS SageMaker batch transform job, I need to do some preprocessing. Is it possible to write a custom script and associate it as the entry point for the batch transformer?

Upvotes: 0

Views: 1027

Answers (3)

akshat garg

Reputation: 194

After training, your artifacts end up in model.tar.gz. You then have to create a model using:

model = sagemaker.model.Model(
    image_uri=image_uri,    # inference container image
    model_data=model_data,  # S3 URI of your model.tar.gz
    entry_point='myinference.py',
    name=model_name,
    ...)
model.create(instance_type=...)

Within the model object you can specify an inference script ('myinference.py'). This script can implement some or all of the handler methods model_fn, input_fn, predict_fn and output_fn. For details on the inference handler, see https://github.com/aws/sagemaker-inference-toolkit/blob/master/src/sagemaker_inference/default_inference_handler.py
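The handler contract above can be sketched as a minimal myinference.py. The doubling "model" and the JSON request shape here are placeholder assumptions for illustration, not anything the SageMaker containers require:

```python
import json

def model_fn(model_dir):
    # Load and return the model from model_dir (the extracted model.tar.gz).
    # Placeholder: a "model" that simply doubles each input value.
    return lambda xs: [2 * x for x in xs]

def input_fn(request_body, content_type):
    # Deserialize the request. Assumes JSON like '{"instances": [1, 2, 3]}'.
    if content_type == "application/json":
        return json.loads(request_body)["instances"]
    raise ValueError(f"Unsupported content type: {content_type}")

def predict_fn(data, model):
    # Run inference with the model returned by model_fn.
    return model(data)

def output_fn(prediction, accept):
    # Serialize the prediction back to the requested format.
    if accept == "application/json":
        return json.dumps({"predictions": prediction})
    raise ValueError(f"Unsupported accept type: {accept}")
```

Any method you omit falls back to the container's default implementation.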

model.create() creates a new model that can be deployed. When creating the transformer, you can then refer to it by model_name:

batch_transformer = Transformer(
    model_name=model_name,
    instance_count=1,
    ...)

Upvotes: 0

Temple Jersey

Reputation: 63

The inference code and requirements.txt should be packaged into model.tar.gz during training. They will then be picked up by the batch transform job.

Upvotes: 0

Neil McGuigan

Reputation: 48287

SageMaker batch transformations do their work using a model. However, that model can also be a Serial Inference Pipeline model, which is essentially two or more models running one after the other:

https://docs.aws.amazon.com/sagemaker/latest/dg/inference-pipelines.html

So your first model could be one that does your transformations, and the second model could do your predictions.

It depends on what kind of transformations you're hoping to do. If they're reasonably straightforward, use the scikit-learn image.

Upvotes: 0
