kilianc

Reputation: 7796

Google Cloud Run and JSON in ENV variables

I am deploying a small Node.js app that uses the config package. NODE_CONFIG is read and used to supersede any local configuration. This is handy when deploying a service with secrets, because the configuration can be injected from the outside.

When trying to achieve this with Google Cloud Run and the CLI, I am getting an escape error. Apparently only dictionaries are supported by the CLI.

Is there a better way to pass JSON content through env variables?

gcloud run deploy pr-$PULL_REQUEST \
  --platform=managed \
  --revision-suffix=$revision \
  --region us-central1 \
  --set-env-vars="NODE_ENV=development,NODE_CONFIG='$json'" \
  --allow-unauthenticated \
  --image gcr.io/...
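One workaround worth noting, documented under `gcloud topic escaping`: gcloud list-type flags accept an alternate delimiter, declared by prefixing the value with `^DELIM^`, so the commas inside the JSON no longer split the list. A minimal sketch of building such a flag value (the `@` delimiter is an arbitrary choice):

```shell
json='{"1":"2","3":"4"}'
# Prefixing with ^@^ tells gcloud to split the list on "@" instead of ",",
# so the commas inside the JSON value survive intact:
env_flag="^@^NODE_ENV=development@NODE_CONFIG=$json"
printf '%s\n' "$env_flag"
# → ^@^NODE_ENV=development@NODE_CONFIG={"1":"2","3":"4"}
```

You would then pass this as `--set-env-vars="$env_flag"` on the deploy command.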

Upvotes: 2

Views: 14260

Answers (2)

ahmet alp balkan

Reputation: 45216

If you have multiple environment variables and you insist on listing them on the gcloud CLI (instead of writing a Service object in YAML and using gcloud alpha run services replace), you can simply repeat --set-env-vars:

gcloud run deploy \
  --set-env-vars="A=B" \
  --set-env-vars="C=D" \
  --image=gcr.io/cloudrun/hello

Here you can simply write "KEY=$value", with the surrounding quotes included.

Quoting the value 1) prevents the argument from splitting in case $value contains spaces, and 2) preserves the quote characters inside $value, which you will have because it is a JSON value.
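The word-splitting point is easy to check locally. A small sketch (the `count_args` helper is made up here) showing that without quotes, a space inside the JSON splits the argument in two:

```shell
json='{"msg":"hello world"}'
# Hypothetical helper that just reports how many arguments it received:
count_args() { echo $#; }
count_args NODE_CONFIG=$json     # unquoted: the space splits it → 2
count_args "NODE_CONFIG=$json"   # quoted: one intact argument → 1
```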


Example

json='{"hello":"world"}'

gcloud run deploy foo \
  --set-env-vars="A=$json" \
  --set-env-vars="C=D" \
  --image=gcr.io/cloudrun/hello

gcloud run services describe [...] output:

  Env vars:
    A                {"hello":"world"}
    C                D

Upvotes: 6

guillaume blaquiere

Reputation: 75745

Indeed, it doesn't work with the CLI: --set-env-vars splits its value on commas, so any comma inside the JSON breaks the parsing.

You can use @Kolban's solution, where you store a file in GCS and reference only that file in your env var. But this solution implies updating your code to fetch and read the file from Cloud Storage (an extra IAM role, more deployment steps). It does work, though: I use this mode for passing SQL queries to my Cloud Run services, and until now it was the only solution!
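For reference, that GCS-indirection approach can be sketched roughly as follows (the bucket, object path, and CONFIG_GCS_PATH variable name are all made up here; your service then fetches and parses the object at startup). This is a deployment fragment, not something runnable outside a GCP project:

```shell
# Upload the config object once (hypothetical bucket name):
gsutil cp config.json gs://my-config-bucket/jsonenv/config.json

# Pass only its path as an env var; no escaping problem, since the
# path contains no comma:
gcloud run deploy jsonenv \
  --platform=managed \
  --region=us-central1 \
  --set-env-vars="CONFIG_GCS_PATH=gs://my-config-bucket/jsonenv/config.json" \
  --image=gcr.io/PROJECT_ID/CONTAINER_NAME
```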

However, there is a new beta solution, available for two weeks now: the services replace feature. It lets you describe your service with nothing but a YAML file. Of course, this YAML is compliant with the Knative API, and thereby standard.

I tested it with JSON and it works, but it implies some changes to your deployment process (you have to create the YAML file with the image and the env vars). First, create a minimal YAML file (named, for example, jsonenv.yaml):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  #Name of the service
  name: jsonenv
spec:
  template:
    spec:
      containers:
      - env:
        - name: NODE_CONFIG
          value: '{"1":"2","3":"4"}'
        image: gcr.io/PROJECT_ID/CONTAINER_NAME

Then use this command

gcloud beta run services replace jsonenv.yaml --platform managed --region WHERE_YOU_WANT

That works. Hope this solves your issue.

EDIT

For naming your revision, you can add metadata.name inside the template, like this:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  #Name of the service
  name: jsonenv 
spec:
  template:
    metadata:
      #Name of the revision
      name: my-revision-name
    spec:
      containers:

However, you can't manage IAM roles with this command; thereby, --allow-unauthenticated can't be used with replace. You have to set this authorization manually:

gcloud run services add-iam-policy-binding jsonenv \
    --member="allUsers" \
    --role="roles/run.invoker"

Note: you only have to do this once, when you deploy the service for the first time, not for each new revision.

Upvotes: 1
