Peter

Reputation: 1798

Inject Google Artifact Registry credentials to Docker build

We have a Google Artifact Registry for our Python packages. Authentication is handled with keyring and the keyrings.google-artifactregistry-auth plugin, and locally it works well.

However, how do I pass credentials to Docker build when I want to build a Docker image that needs to install a package from our private registry?

I'd like to keep the Dockerfile the same when building with a user account or with a service account.

This works, but I'm not sure it's best practice:

# syntax=docker/dockerfile:1
FROM python:3.9

# keyring + the Artifact Registry plugin let pip authenticate to the private registry
RUN pip install keyring keyrings.google-artifactregistry-auth

COPY requirements.txt .

# Mount the application default credentials as a BuildKit secret,
# so they are available to pip but never written into an image layer
RUN --mount=type=secret,id=creds,target=/root/.config/gcloud/application_default_credentials.json \
    pip install -r requirements.txt

Then build with:

docker build --secret="id=creds,src=$HOME/.config/gcloud/application_default_credentials.json" .
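Note that --mount=type=secret requires BuildKit; on older Docker versions where BuildKit is not the default builder, it has to be enabled explicitly:

DOCKER_BUILDKIT=1 docker build --secret="id=creds,src=$HOME/.config/gcloud/application_default_credentials.json" .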

Upvotes: 14

Views: 5210

Answers (3)

Prabhjot Singh Rai

Reputation: 2545

Although this might be late, you can use the combination of keyring and build args as follows:

On a secure machine, follow the steps listed here up to step 4, which runs:

gcloud artifacts print-settings python --project=PROJECT \
    --repository=REPOSITORY \
    --location=LOCATION

Take the URL printed under the "# Insert the following snippet into your pip.conf" section of its output and use it as the --extra-index-url for pip install. You can pass this URL in as a build argument.
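A minimal sketch of passing that URL as a build arg (the ARG name is hypothetical, and the placeholder stands for the index URL printed by the command above):

# syntax=docker/dockerfile:1
FROM python:3.9

# Index URL from gcloud artifacts print-settings, supplied at build time
ARG PIP_EXTRA_INDEX_URL
COPY requirements.txt .
RUN pip install --extra-index-url ${PIP_EXTRA_INDEX_URL} -r requirements.txt

Built with:

docker build --build-arg PIP_EXTRA_INDEX_URL="<url-from-print-settings>" .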

Upvotes: 0

Robino

Reputation: 4929

The most secure approach, and current best practice, for accessing private resources during docker build is Docker secrets, as you have in the OP.

You probably won't want to use the keyring again after this, so you could consider uninstalling it and following up with a multi-stage build for further security and a smaller image.

There are exceptions, for example if this is just a base image and application images will reuse the keyring with another RUN --mount command. But even then, a final multi-stage build could help for the same reasons.

The result will be a clean Python Docker image with just the required packages installed, in a single layer, and nothing that isn't needed in production, such as the keyring packages.
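A minimal multi-stage sketch of that idea (the --prefix path and base image are illustrative and would need adapting):

# syntax=docker/dockerfile:1
FROM python:3.9 AS builder

RUN pip install keyring keyrings.google-artifactregistry-auth

COPY requirements.txt .

# Install into an isolated prefix so only the packages are carried forward
RUN --mount=type=secret,id=creds,target=/root/.config/gcloud/application_default_credentials.json \
    pip install --prefix=/install -r requirements.txt

FROM python:3.9

# Only the installed packages come across; the keyring helpers stay behind in the builder
COPY --from=builder /install /usr/local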

Upvotes: 0

LondonAppDev

Reputation: 9663

Using keyring is great when working locally, but in my opinion it's not the best solution for a Dockerfile. This is because your only options are to mount volumes at build time (which I feel is messy) or to copy your credentials into the image (which I feel is insecure).

Instead, I got this working by doing the following in Dockerfile:

FROM python:3.10

ARG AUTHED_ARTIFACT_REG_URL
COPY ./requirements.txt /requirements.txt

RUN pip install --extra-index-url ${AUTHED_ARTIFACT_REG_URL} -r /requirements.txt

Then, to build your Dockerfile you can run:

docker build --build-arg AUTHED_ARTIFACT_REG_URL="https://oauth2accesstoken:$(gcloud auth print-access-token)@url-for-artifact-registry" .

Although it doesn't seem to be in the official docs for Artifact Registry, this works as an alternative to using keyring. Note that the token generated by gcloud auth print-access-token is valid for 1 hour.

Upvotes: 15
