crazysnake99

Reputation: 81

Docker image deploys locally but fails on Google Cloud Run

Here is my Dockerfile:

# Use lightweight Python image
FROM python:3.9-slim

ARG DOCKER_ENV

# PYTHONFAULTHANDLER=1 - Display a traceback if a segfault occurs.
# PYTHONUNBUFFERED=1 - Allow statements and log messages to immediately appear in the Knative logs
# PIP_NO_CACHE_DIR=off - Disable pip cache for smaller Docker images.
# PIP_DISABLE_PIP_VERSION_CHECK=on - Ignore pip new version warning.
# PIP_DEFAULT_TIMEOUT=100 - Give pip longer than the 15 second timeout. 
ENV DOCKER_ENV=${DOCKER_ENV} \
  PYTHONFAULTHANDLER=1 \
  PYTHONUNBUFFERED=1 \
  PIP_NO_CACHE_DIR=off \
  PIP_DISABLE_PIP_VERSION_CHECK=on \
  PIP_DEFAULT_TIMEOUT=100

# Install poetry 
RUN pip install poetry

# Set working directory in container to /app
WORKDIR /app

# Copy only dependency requirements to container to cache them in docker layer
COPY poetry.lock pyproject.toml /app/

# Don't need virtualenv because environment is already isolated in a container
RUN poetry config virtualenvs.create false

# Install production dependencies 
RUN poetry install --no-dev --no-ansi

# Copy app into container 
COPY . /app

# Run server
CMD [ "poetry", "run" , "python", "api.py"]

I can build and run this locally with no problem, and the server starts. However, when I deploy to Cloud Run, I get the following error and the container fails:

Creating virtualenv indie-9TtSrW0h-py3.9 in /home/.cache/pypoetry/virtualenvs
File "/app/api.py", line 6, in <module>
    import jwt
ModuleNotFoundError: No module named 'jwt'

Does anybody have any idea why this works locally but is missing a dependency on Cloud Run? One weird thing is that I'm explicitly telling Docker NOT to use a virtual environment in the Dockerfile. This works when I run the image locally, but on Google Cloud it insists on building a virtual environment anyway. Is there some sort of incompatibility between Google Cloud Run's version of Docker and poetry that I'm missing here?

Upvotes: 7

Views: 2191

Answers (2)

John Carter

Reputation: 55271

We had a similar problem: the build succeeded, but the Cloud Run service failed to start with a "module not found" error, though with a slightly different config.

In our case, we didn't set virtualenvs.create.

One solution to the problem is to set virtualenvs.in-project to true:

# Create virtualenv at .venv in the project instead of ~/.cache/
RUN poetry config virtualenvs.in-project true
# Install production dependencies 
RUN poetry install --no-dev --no-ansi

Another solution is to explicitly set the virtualenv path so it doesn't rely on $HOME, e.g.:

# Create virtualenv at /venv instead of ~/.cache/
RUN poetry config virtualenvs.path /venv
# Install production dependencies 
RUN poetry install --no-dev --no-ansi

Either way, beware that poetry will use the virtualenv at ./.venv if it sees one, so make sure .venv/ is listed in your .dockerignore.
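For illustration, a minimal .dockerignore along these lines should keep a local virtualenv (and other local-only Python artifacts) out of the build context:

# Keep the local virtualenv and bytecode caches out of the image
.venv/
__pycache__/
*.pyc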

I'm pretty sure this is because of this issue, which causes Cloud Run to set $HOME=/home when the container user is root.

According to this HN comment, this is a bug and will be fixed: https://news.ycombinator.com/item?id=28678152. See also "google cloud run" changes HOME to /home for CMD where RUN uses /root.
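If you want to confirm this in your own service, one quick check (just a sketch; it assumes your entrypoint is api.py as in the question) is to log $HOME when the container starts:

# Temporary diagnostic: print the runtime HOME before starting the app
CMD ["sh", "-c", "echo HOME=$HOME && poetry run python api.py"]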

I think the more general solution to this issue is to set USER so that poetry install isn't running as root.
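For illustration only, a sketch of that approach could look like the following. The user name appuser and its UID are arbitrary choices; it also assumes poetry itself was installed system-wide earlier in the Dockerfile, and that the virtualenvs.create false line from the question is dropped, since a non-root user can't write to the system site-packages:

# 'appuser' and the UID are arbitrary example values
RUN useradd --create-home --uid 1000 appuser
WORKDIR /app
COPY --chown=appuser:appuser poetry.lock pyproject.toml /app/
USER appuser
# As a non-root user, $HOME is /home/appuser both at build time and at runtime,
# so the virtualenv poetry creates under ~/.cache/pypoetry is found again by CMD
RUN poetry install --no-dev --no-ansi
COPY --chown=appuser:appuser . /app
CMD ["poetry", "run", "python", "api.py"]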

Upvotes: 5

Qback

Reputation: 4908

It seems like, for some reason, Cloud Run runs the CMD command in a context isolated from the rest of your Dockerfile. That's why poetry thinks it should create a new virtualenv.

Workaround

(This works for me as of 21.11.2021.)

Instead of

CMD [ "poetry", "run" , "python", "api.py"]

Use:

CMD ["python", "api.py"]

This should work because your dependencies are already installed without a virtualenv, so you don't need poetry at this point anymore.
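Put together with the Dockerfile from the question, the end of the file would then look roughly like this (a sketch, keeping the virtualenvs.create false config so the packages land in the system site-packages that the plain python interpreter sees):

# Install dependencies into the system site-packages (no virtualenv)
RUN poetry config virtualenvs.create false
RUN poetry install --no-dev --no-ansi

# Copy app into container
COPY . /app

# Run the server directly with the system interpreter; poetry is not needed at runtime
CMD ["python", "api.py"]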

Upvotes: 1
