Conor

Reputation: 131

Deploying MLflow Model without Conda environment

Currently working on deploying my MLflow Model in a Docker container. The Docker container is set up with all the necessary dependencies for the model, so it seems redundant for MLflow to then create and activate a conda environment for the model. Looking at the documentation (https://www.mlflow.org/docs/latest/cli.html#mlflow-models-serve), it says you can serve the model with the --no-conda flag and that MLflow will assume you are "running within a Conda environment with the necessary dependencies". This is working for us when we run in any environment with the necessary dependencies, not necessarily a Conda environment. Is this correct? Or do we absolutely need to have a Conda environment active when running with the --no-conda flag?

For example, I can create a virtualenv and, with the virtualenv active, serve the model locally using mlflow models serve -m [model/path] --no-conda. The model then performs properly, but the documentation makes it sound like this shouldn’t work because it explicitly calls for a Conda environment.

Upvotes: 7

Views: 8347

Answers (1)

Andreas Klintberg

Reputation: 460

You do not need an active Conda environment when using the --no-conda option.

As described in the comment (thanks @Nander Speersta), --no-conda is deprecated in newer versions of MLflow in favor of --env-manager=local.

The Quickstart guide (https://www.mlflow.org/docs/latest/quickstart.html) notes that serving works as long as all dependencies are installed; it doesn't matter how you installed them (pipenv, poetry, or pip).

The caveat: this way you can't have MLflow install your project's dependencies for you (since it uses conda to do that).

You should be able to safely continue your current practice.
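For reference, a minimal sketch of both invocations. The model path, port, and package names here are placeholders; substitute your own:

```shell
# Assumes the model's dependencies are already installed in the active environment
# (a plain virtualenv works; no conda required).
python -m venv .venv
source .venv/bin/activate
pip install mlflow  # plus whatever packages the model itself needs

# Older MLflow versions:
mlflow models serve -m ./path/to/model --no-conda -p 5000

# Newer MLflow versions, where --no-conda is deprecated:
mlflow models serve -m ./path/to/model --env-manager=local -p 5000
```

Both forms tell MLflow to use the currently active environment as-is instead of creating one.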

Upvotes: 7
