pedro malheiro

Reputation: 103

How to pass a Google Cloud application credentials file to a Docker container

I would like to pass my Google Cloud Platform service account's JSON credentials file to a Docker container so that the container can access a Cloud Storage bucket. So far I have tried to pass the file path as an environment variable on the run command, along these lines (image name and key path are placeholders):
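docker run -e GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json my-image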

But nothing worked, and I always get the following error when running the Docker container:

W external/org_tensorflow/tensorflow/core/platform/cloud/google_auth_provider.cc:184] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Not found: Could not locate the credentials file.".

How do I pass the Google credentials file to a container running locally on my personal laptop?

Upvotes: 10

Views: 15860

Answers (5)

Jakub Kukul

Reputation: 14494

A short answer that works on Linux and macOS. Assuming you have already run gcloud auth application-default login locally, just mount your local gcloud config directory as a volume into the container (via -v ~/.config/gcloud:/root/.config/gcloud), e.g.:

docker run -v ~/.config/gcloud:/root/.config/gcloud my-image-name
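If your image runs as a non-root user, the same idea should work by mounting into that user's home directory instead (appuser here is a placeholder for whatever user your image actually runs as):

docker run -v ~/.config/gcloud:/home/appuser/.config/gcloud my-image-name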

Upvotes: 1

markwkiehl

Reputation: 1

For Windows OS, the following works:

This creates and runs a new container from the latest image named "data-platform-pub":

  • Option -v bind-mounts the Google SDK folder as a volume
  • Option -i keeps STDIN open even if not attached
  • Option -t allocates a pseudo-TTY

docker run -it -v "%appdata%//gcloud"://root/.config/gcloud data-platform-pub:latest

NOTE: The command below is more minimal and also works:

docker run -it -v "%appdata%//gcloud//application_default_credentials.json"://root/.config/gcloud/application_default_credentials.json data-platform-pub:latest

I have a complete, detailed example of how to do this in this free, public article: https://medium.com/@markwkiehl/containerization-using-docker-469a0fa9dd69

DO NOT set the environment variable GOOGLE_APPLICATION_CREDENTIALS! Doing so overrides the ADC flow. You want the ADC flow to ultimately resolve the required credentials by using the metadata server and the service account impersonation you have configured.

The article also shows how to run a single Python script that uses the Google SDK and Google services both locally in a Docker container (as just described) and as a Google Cloud Run job from the Docker image uploaded to Google Artifact Registry.

Upvotes: 0

Zaffer

Reputation: 1809

I log into gcloud in my local environment, then share that JSON file as a volume at the same location in the container.

Here is a great post on how to do it, with the relevant extract below: Use Google Cloud user credentials when testing containers locally

Login locally

To get your default user credentials in your local environment, you have to use the gcloud SDK. You have two commands to get authenticated:

  • gcloud auth login to get authenticated on all subsequent gcloud commands
  • gcloud auth application-default login to create your ADC locally, in a "well-known" location

Note location of credentials

The Google auth library tries to get valid credentials by performing checks in this order:

  • Look at the environment variable GOOGLE_APPLICATION_CREDENTIALS value. If it exists, use it, else…
  • Look at the metadata server (only on Google Cloud Platform). If it returns the correct HTTP codes, use it, else…
  • Look at the "well-known" location to see if a user credentials JSON file exists.

The "well-known" locations are:

  • On Linux: ~/.config/gcloud/application_default_credentials.json
  • On Windows: %appdata%/gcloud/application_default_credentials.json
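You can sanity-check the first and third steps on your local machine before starting the container, e.g. on Linux/macOS (a quick sketch; the metadata server check in step 2 only applies on GCP):

# step 1: is the environment variable set? (empty means the library falls through)
echo "GOOGLE_APPLICATION_CREDENTIALS=${GOOGLE_APPLICATION_CREDENTIALS}"
# step 3: does the well-known ADC file exist?
ls -l ~/.config/gcloud/application_default_credentials.json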

Share volume with container

Therefore, you have to run your local docker run command like this

ADC=~/.config/gcloud/application_default_credentials.json \
docker run \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json \
  -v ${ADC}:/tmp/keys/FILE_NAME.json:ro \
  <IMAGE_URL>

NB: this is only for local development; on Google Cloud Platform, the credentials for the service are automatically provided for you.

Upvotes: 7

gagan

Reputation: 355

Two ways to do it:

Secrets - these work with Docker swarm mode.

  • create docker secrets
  • use secret with a container using --secret

The advantage is that secrets are stored encrypted; they are decrypted only when mounted into containers.
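A minimal sketch of that flow, assuming swarm mode is enabled and a local key file named service-account.json (all names here are placeholders):

# create the secret from the key file (requires swarm mode: docker swarm init)
docker secret create gcp_key ./service-account.json
# run a service with the secret; it is mounted at /run/secrets/gcp_key
docker service create \
  --name my-service \
  --secret gcp_key \
  -e GOOGLE_APPLICATION_CREDENTIALS=/run/secrets/gcp_key \
  my-image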

Upvotes: 0

Martin Zeitler

Reputation: 76569

You cannot "pass" an external path, but have to add the JSON into the container.

Upvotes: 3
