Reputation: 31
I am trying to get inference from a deployed pretrained model in a SageMaker notebook environment. While executing the line of code below,
response = predictor.predict(serialized_data)
I receive the following SSL error:
SSLError: SSL validation failed for https://runtime.sagemaker.us-east-1.amazonaws.com/endpoints/pytorch-inference-2024-06-11-13-39-50-210/invocations EOF occurred in violation of protocol (_ssl.c:2426)
Expected Behavior: I should receive the response. I tested this manually in the notebook environment without deploying the model, just loading the weights from the S3 bucket and using the same piece of code for inference, and it worked.
Code:
input_data = (interaction_data, mt_data)
results = predict_fn(input_data, model)
Result:
Iteration at 0: auc 0.964, map 0.439
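For context, the local test loads the weights straight from the S3 bucket in the notebook before calling predict_fn. A rough sketch (the weights key and the model construction here are placeholders, not my exact code):

import boto3
import torch

# Download the trained weights from S3 (the key name is a placeholder)
s3 = boto3.client('s3')
s3.download_file(model_bucket, 'Model-Structure/model.pth', 'model.pth')

# model is the already-instantiated network; its definition is omitted here
model.load_state_dict(torch.load('model.pth', map_location='cpu'))
model.eval()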
Current Behavior:
Assuming that the model is defined properly as below,
pytorch_model = PyTorchModel(
    model_data=f's3://{model_bucket}/{model_key}',
    role=role,
    entry_point='inference.py',
    framework_version='1.8.1',  # Specify the PyTorch version
    py_version='py3',
    sagemaker_session=sagemaker_session,
)
and the deployment takes place properly as,
predictor = pytorch_model.deploy(instance_type='ml.m5.large', initial_instance_count=1)
While executing
response = predictor.predict(serialized_data)
I receive the error below:
SSLError: SSL validation failed for https://runtime.sagemaker.us-east-1.amazonaws.com/endpoints/pytorch-inference-2024-06-11-13-39-50-210/invocations EOF occurred in violation of protocol (_ssl.c:2426)
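To narrow this down, the endpoint can also be invoked directly through the low-level boto3 runtime client with the same payload, bypassing the SDK's predictor wrapper (a sketch; the content type is my assumption for raw pickled bytes):

import boto3

# Call the endpoint via the runtime client instead of predictor.predict
runtime = boto3.client('sagemaker-runtime')
response = runtime.invoke_endpoint(
    EndpointName=predictor.endpoint_name,
    ContentType='application/octet-stream',  # assumed type for raw pickled bytes
    Body=serialized_data,
)
result = response['Body'].read()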
Reproduction Steps:
Define your data paths:
interaction_data = "s3://path_to_pkl/interaction.pkl"
auxiliary_data = "s3://path_to_pkl/auxiliary.pkl"
Define the model bucket:
model_bucket = 'path_to_model_bucket'
model_key = 'Model-Structure/model.tar.gz'
Arrange your data properly:
with open("interaction.pkl", 'rb') as f:
    data1 = CPU_Unpickler(f).load()
with open("auxiliary.pkl", 'rb') as f:
    data2 = CPU_Unpickler(f).load()
serialized_data = pickle.dumps({'data1': data1, 'data2': data2})
Define your model:
pytorch_model = PyTorchModel(
    model_data=f's3://{model_bucket}/{model_key}',
    role=role,
    entry_point='inference.py',
    framework_version='1.8.1',  # Specify the PyTorch version
    py_version='py3',
    sagemaker_session=sagemaker_session,
)
Deploy the model:
predictor = pytorch_model.deploy(instance_type='ml.m5.large', initial_instance_count=1)
Get the response:
response = predictor.predict(serialized_data)
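One note on this last step: predictor.predict() runs the payload through the predictor's serializer, and a PyTorch predictor defaults to NumPy (application/x-npy), which does not match a raw pickle blob. A sketch of deploying with an explicit pass-through serializer; whether this relates to the SSL error is an assumption on my part:

from sagemaker.serializers import IdentitySerializer
from sagemaker.deserializers import BytesDeserializer

# Send the pickled bytes through unchanged and read raw bytes back
predictor = pytorch_model.deploy(
    instance_type='ml.m5.large',
    initial_instance_count=1,
    serializer=IdentitySerializer(content_type='application/octet-stream'),
    deserializer=BytesDeserializer(),
)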
I also have an inference.py that I use for model evaluation and for producing the response, which is standard practice in the SageMaker environment.
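For completeness, inference.py follows the usual SageMaker PyTorch handler contract (model_fn / input_fn / predict_fn / output_fn). A trimmed sketch of its shape; MyModel, the weights file name, and the call signature in predict_fn are placeholders, not my exact code:

import os
import pickle
import torch
from model import MyModel  # placeholder import for the actual network definition

def model_fn(model_dir):
    # SageMaker unpacks model.tar.gz into model_dir; load the weights from there
    model = MyModel()
    state = torch.load(os.path.join(model_dir, 'model.pth'), map_location='cpu')
    model.load_state_dict(state)
    model.eval()
    return model

def input_fn(request_body, content_type):
    # The client sends a pickled dict, so unpickle it here
    if content_type == 'application/octet-stream':
        return pickle.loads(request_body)
    raise ValueError(f'Unsupported content type: {content_type}')

def predict_fn(input_data, model):
    # Same evaluation path as the local test
    with torch.no_grad():
        return model(input_data['data1'], input_data['data2'])

def output_fn(prediction, accept):
    # Pickle the result for the response body
    return pickle.dumps(prediction)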
Upvotes: 3
Views: 336