lo tolmencre

Reputation: 3954

MLflow model endpoint deployment via Azure results in dependency clash

I am trying to deploy a model via Azure ML that was logged with MLflow in a pretty simple setup.

I followed the guide here, using this snippet:

from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential
import mlflow

ml_client = MLClient.from_config(credential=DefaultAzureCredential())
azureml_mlflow_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri
mlflow.set_tracking_uri(azureml_mlflow_uri)

and ran the experiment with:

mlflow.tensorflow.autolog()

with mlflow.start_run():
   ...

That all worked fine, and the model ended up under "Jobs" in the Azure ML studio web UI. When I then follow the guide here, the deployment fails with:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow 2.9.1 requires protobuf<3.20,>=3.9.2, but you have protobuf 4.21.7 which is incompatible.
tensorboard 2.9.1 requires protobuf<3.20,>=3.9.2, but you have protobuf 4.21.7 which is incompatible.

This is not a dependency that I configured; it is coming from Azure or MLflow. Any idea what to do about that?
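For reference, the error matches tensorflow/tensorboard 2.9.1 pinning protobuf<3.20 while the deployment image installs protobuf 4.21.7. A minimal sketch of a pinned conda environment that could be supplied to the deployment in place of the auto-generated one (all package choices other than the protobuf pin are illustrative, not taken from the actual logged environment):

```yaml
# Hypothetical conda.yaml; the protobuf range is the one
# tensorflow/tensorboard 2.9.1 declare as compatible.
channels:
  - conda-forge
dependencies:
  - python=3.9
  - pip
  - pip:
      - mlflow
      - tensorflow==2.9.1
      - protobuf>=3.9.2,<3.20
```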

Upvotes: 0

Views: 498

Answers (1)

Sairam Tadepalli

Reputation: 1683

I reproduced the same problem and completed the task. The model was generated, and the dependency warnings did not block the operation; it is advisable to use a code editor such as a notebook in the Azure portal to avoid the library dependency issues.

  1. Created the Azure Machine Learning resource.

  2. Create a project using the subscription key and resource group information. The code block mentioned in the question is used.

  3. Get the details of the subscription, then use the code block below to register and manage the model. The model folder produced by the job is registered; this includes the MLmodel file, model.pkl and conda.yaml.

        model_path = "model"
        model_uri = 'runs:/{}/{}'.format(run_id, model_path)
        mlflow.register_model(model_uri, "registered_model_name")

  4. Go to the workspace that was created and check the Jobs and Models entries in the left panel.

  5. Go to Models and check the MLflow model that was created. Go to Artifacts to get the model information.

  6. The complete information on the dependent libraries is displayed under the model.
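The registration step above can be sketched as a small script. The helper only formats the `runs:/` URI; the run id and registered model name are placeholders, and the `mlflow.register_model` call needs a configured tracking URI, so it is shown as a commented example rather than executed:

```python
def format_model_uri(run_id: str, model_path: str = "model") -> str:
    """Build the runs:/ URI that mlflow.register_model expects."""
    return "runs:/{}/{}".format(run_id, model_path)


def register_run_model(run_id: str, name: str):
    """Register the model folder logged under a run's 'model' artifact path."""
    # Imported here so format_model_uri stays usable without mlflow installed.
    import mlflow

    model_uri = format_model_uri(run_id)
    # Registers the MLmodel file, model.pkl and conda.yaml logged by the run
    # into the workspace model registry under the given name.
    return mlflow.register_model(model_uri, name)


# Example with a hypothetical run id (requires mlflow.set_tracking_uri first):
# register_run_model("a1b2c3d4", "registered_model_name")
```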

Upvotes: 0
