red888

Reputation: 31652

keyrings.google-artifactregistry.auth for python AR repo without a service account JSON key

I don't use non-expiring service account JSON keys; I use ambient creds and OIDC everywhere, so I can't generate and use JSON keys for AR.

I want to do pip installs in my Dockerfiles to install my private AR packages.

Doing something like this works:

# Passing in a non-expiring key
docker build -f "Dockerfile" \
    --secret id=gcp_creds,src="gsa_key.json" .

##### IN DOCKERFILE
# I don't want to generate json keys like this
ENV GOOGLE_APPLICATION_CREDENTIALS=/run/secrets/gcp_creds

RUN pip install keyring keyrings.google-artifactregistry-auth 
COPY requirements.txt requirements.txt
RUN --mount=type=secret,id=gcp_creds \
    --mount=type=cache,target=/root/.cache \
    pip install -r requirements.txt

Does keyrings.google-artifactregistry-auth support tokens somehow?

This did not work for example:

# GSA is active through ambient creds or impersonation
gcloud auth print-access-token > /tmp/token.json
# also tried: gcloud config config-helper --format="json(credential)" > /tmp/token.json

docker build -f "Dockerfile" \
    --secret id=gcp_creds,src="/tmp/token.json" .

I'd like to avoid resorting to doing something like building outside of the container and copying the built artifacts in the image. I want to do the build inside the Dockerfile.

EDIT

So I got this working with GCP-provided GH Actions tooling, so I know it's possible to generate temp tokens in a format the keyring will accept. I still want to know how to do this with the gcloud CLI for other scenarios.

Using the Dockerfile above and these GH Actions steps, I can auth via OIDC and generate a creds file the pip keyring provider accepts:

- id: 'auth'
  name: 'auth to gcp'
  uses: 'google-github-actions/auth@v2'
  with:
    token_format: 'access_token'
    workload_identity_provider: 'projects/xxxxx/locations/global/workloadIdentityPools/github/providers/github'
    service_account: xxxxxxxx

- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3

- uses: docker/build-push-action@v6
  with:
    push: false
    context: '.'
    file: 'Dockerfile'
    tags: myimage:mytag
    load: true
    cache-from: type=gha,scope=localbuild
    cache-to: type=gha,mode=max,scope=localbuild
    secret-files: |
      gcp_creds=${{steps.auth.outputs.credentials_file_path}}

Upvotes: 1

Views: 354

Answers (3)

Idriss Neumann

Reputation: 3838

Mazlum's answer is completely valid. However, here's another solution that follows the OCI standard more closely and will work with every build tool in an agnostic way:

1/ You can generate the credential file dynamically (running gcloud auth print-access-token > creds.json in your pipeline) and inject its content using an ARG in the build stage of your Dockerfile (see the sketch after this list).

2/ Then make sure this creds file does not remain in the final image: either use a multistage build, which by design copies only the result of the build into your second stage, or add an rm -f command in your Dockerfile in the same RUN that writes the creds.json file.

3/ If you still want to benefit from the cache (the creds file will be detected as changed every time) and don't want to lose too much time re-downloading all the artifacts with pip, npm, or whatever: I recommend using ORAS https://oras.land/

It'll save you hours of pipelines sometimes :D

4/ Don't forget to remove the generated creds file from your CI/CD runner... (you can avoid the temp file entirely if you inject the output of gcloud auth print-access-token directly as an ARG).
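Here is a minimal sketch of steps 1 and 2 combined, assuming a Python Artifact Registry repo at europe-west1-python.pkg.dev/my-project/my-repo (placeholder names). Since a bare access token is not a valid ADC file, the sketch passes the token as the index-URL password (as in Mazlum's answer below) instead of going through the keyring:

# In the pipeline: pass the short-lived token directly as a build arg (no temp file)
docker build --build-arg GCP_TOKEN="$(gcloud auth print-access-token)" -t myimage .

##### IN DOCKERFILE
# ---- build stage: the token only ever exists in this stage ----
FROM python:3.12-slim AS build
ARG GCP_TOKEN
COPY requirements.txt requirements.txt
RUN pip install \
    --extra-index-url "https://oauth2accesstoken:${GCP_TOKEN}@europe-west1-python.pkg.dev/my-project/my-repo/simple/" \
    --target /opt/deps -r requirements.txt

# ---- final stage: only the installed packages are copied over ----
FROM python:3.12-slim
COPY --from=build /opt/deps /usr/local/lib/python3.12/site-packages

Note that, as Mazlum's answer below points out, a build ARG can still leak into image history or provenance attestations, so the secret-mount approach is safer if the image or repository is public.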

Upvotes: 2

Mazlum Tosun

Reputation: 6582

Sorry for the misunderstanding in my previous answer.

If you're looking for a solution based on the gcloud CLI for different scenarios, your approach is almost correct.

You can pass the access token, as shown in your post, and use it as a Docker secret mount:

gcloud auth print-access-token > /tmp/token.json

docker build -f "Dockerfile" \
    --secret id=gcp_creds,src="/tmp/token.json" .

In the Dockerfile, you can directly use this token (mounted as a secret volume) in the private repository URL within the extra-index-url parameter.

With this approach, the Keyring packages are typically not required.

For example:

COPY requirements.txt requirements.txt

RUN --mount=type=secret,id=gcp_creds,target=/run/secrets/gcp_creds \
    TOKEN=$(cat /run/secrets/gcp_creds) && \
    pip install --extra-index-url "https://oauth2accesstoken:${TOKEN}@europe-west1-python.pkg.dev/my-project/my-repo/simple/" -r requirements.txt

The extra-index-url follows this format for an Artifact Registry Python repo when using an access token:

https://oauth2accesstoken:${TOKEN}@{location}-python.pkg.dev/{project}/{repository}/simple/

One key advantage of using a secret volume mount is that it does not invalidate the Docker cache. If your requirements.txt file remains unchanged but your token changes, Docker will still reuse the cached layers, ensuring faster builds.

It's really important to use a secret volume mount instead of ARGs, because passing secrets via --build-arg is not secure: the value remains embedded in the image metadata.

It isn't recommended to use build arguments for passing secrets such as user credentials, API tokens, etc. Build arguments are visible in the docker history command and in max mode provenance attestations, which are attached to the image by default if you use the Buildx GitHub Actions and your GitHub repository is public.

See the Docker ARG documentation.
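For instance, a quick way to check whether a value passed via --build-arg ended up in the image metadata (myimage:mytag and TOKEN are placeholder names here):

# Build args are recorded in the layer history and image config
docker history --no-trunc myimage:mytag | grep -i token
docker image inspect myimage:mytag --format '{{json .Config}}' | grep -i token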

Upvotes: 3

yannco

Reputation: 176

There are different methods to authenticate to Artifact Registry. Since you prefer using temporary credentials, you can try using a short-lived OAuth access token for password authentication. However, keyrings.google-artifactregistry-auth relies on ADC or JSON credentials, so the access token would need to follow a specific JSON credentials file format.

As for AR authentication during docker build time, it is best practice to generate the access token using gcloud auth print-access-token before performing docker build.

I came across this StackOverflow post about injecting credentials into docker build. You can follow the suggestions made in that thread and see if they help.
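As one sketch consistent with the ADC requirement (assuming ADC has already been set up on the build host, e.g. via gcloud auth application-default login, and that the Dockerfile sets GOOGLE_APPLICATION_CREDENTIALS=/run/secrets/gcp_creds as in the question), you could mount the ADC JSON file itself as the build secret instead of a bare token:

# Default ADC location written by `gcloud auth application-default login`
ADC_FILE="$HOME/.config/gcloud/application_default_credentials.json"

docker build -f "Dockerfile" \
    --secret id=gcp_creds,src="$ADC_FILE" .

Note this ADC file is not a service account key, but it does contain a longer-lived refresh token rather than a short-lived access token.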

Upvotes: 0
