exAres

Reputation: 4926

How to use the input_example of an MLflow-logged ONNX model in Databricks to make predictions?

I logged an ONNX model (converted from a PySpark model) in MLflow like this:

import mlflow

with mlflow.start_run() as run:
    mlflow.onnx.log_model(
        onnx_model=my_onnx_model,
        artifact_path="onnx_model",
        input_example=input_example,
    )

where input_example is a pandas DataFrame that is saved to the run's artifacts.
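As a sanity check independent of the serving-payload route, the logged model can be loaded back through the pyfunc flavor and run against the same DataFrame. A minimal sketch, assuming the run has already been logged as above; the run ID placeholder is hypothetical, and I'm assuming input_example matches the model's input schema:

```python
import mlflow

# Hypothetical URI; substitute the run ID shown on the experiments page
model_uri = "runs:/<run-id>/onnx_model"

# The pyfunc flavor wraps the ONNX model in an onnxruntime session
model = mlflow.pyfunc.load_model(model_uri)

# input_example is the same pandas DataFrame passed to log_model
predictions = model.predict(input_example)
print(predictions)
```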

On the Databricks experiments page, I can see the logged model along with an input_example.json file that indeed contains the data I provided as input_example when logging the model.

How can I now use that data to make predictions and test whether the ONNX model was logged correctly? On the model artifacts page in the Databricks UI, I see:

from mlflow.models import validate_serving_input

model_uri = 'runs:/<some-model-id>/onnx_model'

# The logged model does not contain an input_example.
# Manually generate a serving payload to verify your model prior to deployment.
from mlflow.models import convert_input_example_to_serving_input

# Define INPUT_EXAMPLE via assignment with your own input example to the model
# A valid input example is a data instance suitable for pyfunc prediction
serving_payload = convert_input_example_to_serving_input(INPUT_EXAMPLE)

# Validate the serving payload works on the model
validate_serving_input(model_uri, serving_payload)
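For context on what validate_serving_input expects: for a DataFrame input example, convert_input_example_to_serving_input produces JSON in the dataframe_split format. Below is a minimal sketch of that payload shape, built by hand with pandas; the toy DataFrame is my own stand-in for the real input_example, and no MLflow is required:

```python
import json

import pandas as pd

# Toy stand-in for the input_example DataFrame logged with the model
input_example = pd.DataFrame({"feature1": [1.0, 2.0], "feature2": [3.0, 4.0]})

# MLflow's serving payload for DataFrame inputs nests the frame's
# split-orientation dict (columns, index, data) under "dataframe_split"
serving_payload = json.dumps({"dataframe_split": input_example.to_dict(orient="split")})
print(serving_payload)
```

This hand-built payload mirrors what the helper generates, which can be useful for eyeballing the JSON before calling validate_serving_input.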

Upvotes: 0

Views: 77

Answers (0)
