Ramakrishnan M

Reputation: 492

In Kubernetes, how do I access an environment variable inside a ConfigMap?

I have a use case where I need to append the pod name to the "jdbc_db_url" property, which is located in the "common-configmap.config" file. To achieve this I followed the steps below, but unfortunately it does not work.

Step 1: common-configmap.config

# Database Properties
jdbc_auto_commit=false

jdbc_postgresql_driverClassName=org.postgresql.Driver
jdbc_db_url=jdbc:postgresql://dev.postgres.database.azure.com/dbname?ApplicationName=${POD_NAME}

Step 2: Deploy the ConfigMap into the cluster with the following command:

kubectl create configmap common-configmap --from-env-file /app/conf/common-configmap.config -n default
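Note that kubectl stores each value from the env file verbatim; no shell expansion happens at creation time, which is why the placeholder survives. A quick local sketch of that parsing (the temp path is illustrative):

```shell
# Sketch of how --from-env-file treats each line: split on the first '='
# and keep the value as-is -- the ${POD_NAME} text is never expanded here.
cat > /tmp/common-configmap.config <<'EOF'
jdbc_db_url=jdbc:postgresql://dev.postgres.database.azure.com/dbname?ApplicationName=${POD_NAME}
EOF

while IFS= read -r line; do
  key=${line%%=*}
  value=${line#*=}
  printf '%s -> %s\n' "$key" "$value"
done < /tmp/common-configmap.config
```

The same is true when the ConfigMap is injected via envFrom: Kubernetes only expands the $(VAR) syntax in explicit env value fields, so a ${POD_NAME} placeholder reaches the container untouched.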

Step 3: Create the "myapp" container and service using the manifest files below.

Deployment Manifest file:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: master
    meta.helm.sh/release-namespace: default
  generation: 1
  labels:
    app.kubernetes.io/instance: master
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: myapp
    app.kubernetes.io/version: 4.0.0
    helm.sh/chart: myapp-4.0.0
  name: myapp
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 5
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: master
      app.kubernetes.io/name: myapp
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app.kubernetes.io/instance: master
        app.kubernetes.io/name: myapp
    spec:
      containers:
      - env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        envFrom:
        - configMapRef:
            name: myapp-configmap
        - configMapRef:
            name: common-configmap
        image: docker.com/myapp:4.0.0
        imagePullPolicy: Always
        name: myapp
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always

Service Manifest file:

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: master
    meta.helm.sh/release-namespace: default
  labels:
    app.kubernetes.io/instance: master
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: myapp
    app.kubernetes.io/version: 4.0.0
    helm.sh/chart: myapp-4.0.0
  name: myapp
  namespace: default
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: http
  selector:
    app.kubernetes.io/instance: master
    app.kubernetes.io/name: myapp
  sessionAffinity: None
  type: ClusterIP

To verify the result, I opened a shell in a running container and printed the environment variables:

kubectl exec --stdin --tty myapp-d5db776b9-h25q5 -c myapp -- /bin/sh

Actual Result:

# printenv

jdbc_auto_commit=false
jdbc_postgresql_driverClassName=org.postgresql.Driver
jdbc_db_url=jdbc:postgresql://dev.postgres.database.azure.com/dbname?ApplicationName=${POD_NAME}

Expected Result:

jdbc_auto_commit=false
jdbc_postgresql_driverClassName=org.postgresql.Driver
jdbc_db_url=jdbc:postgresql://dev.postgres.database.azure.com/dbname?ApplicationName=myapp-d5db776b9-h25q5

Thank you in advance for the help.

Upvotes: 1

Views: 4863

Answers (2)

zeisen

Reputation: 125

A straightforward way to do this is with a simple command in your deployment, assuming the jdbc_db_url value is only needed in that pod. Otherwise, you can put similar logic in a custom base image, if you have one, or in an init container.

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: master
    meta.helm.sh/release-namespace: default
  generation: 1
  labels:
    app.kubernetes.io/instance: master
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: myapp
    app.kubernetes.io/version: 4.0.0
    helm.sh/chart: myapp-4.0.0
  name: myapp
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 5
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: master
      app.kubernetes.io/name: myapp
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app.kubernetes.io/instance: master
        app.kubernetes.io/name: myapp
    spec:
      containers:
      - env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        envFrom:
        - configMapRef:
            name: myapp-configmap
        - configMapRef:
            name: common-configmap
        image: docker.com/myapp:4.0.0
        imagePullPolicy: Always
        name: myapp
        command:
        - bash
        - -c
        - 'export jdbc_db_url=$(echo "$jdbc_db_url" | sed "s/\${POD_NAME}/${POD_NAME}/"); exec <your original entrypoint>'
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always

Note the code snippet I inserted:

    command:
    - bash
    - -c
    - 'export jdbc_db_url=$(echo "$jdbc_db_url" | sed "s/\${POD_NAME}/${POD_NAME}/"); exec <your original entrypoint>'

Two details matter here: the sed pattern has to match the literal ${POD_NAME} placeholder (hence the escaped \$), and because command overrides the image's entrypoint, you must still exec the original start command after the export, or the container will exit immediately.
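The substitution itself can be checked locally, outside the cluster (the pod name below is illustrative):

```shell
# Simulate the substitution locally (pod name is illustrative).
POD_NAME="myapp-d5db776b9-h25q5"
jdbc_db_url='jdbc:postgresql://dev.postgres.database.azure.com/dbname?ApplicationName=${POD_NAME}'
# The escaped \$ makes sed match a literal '${POD_NAME}' instead of treating
# '$' as an end-of-line anchor; the replacement side is expanded by the shell.
resolved=$(echo "$jdbc_db_url" | sed "s/\${POD_NAME}/${POD_NAME}/")
echo "$resolved"
```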

Upvotes: 0

Tom Klino

Reputation: 2514

You can do that with an init container that shares an emptyDir volume with your main container.

First, edit your deployment to add an emptyDir volume and an init container, and mount the volume to both containers:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: master
    meta.helm.sh/release-namespace: default
  generation: 1
  labels:
    app.kubernetes.io/instance: master
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: myapp
    app.kubernetes.io/version: 4.0.0
    helm.sh/chart: myapp-4.0.0
  name: myapp
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 5
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: master
      app.kubernetes.io/name: myapp
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app.kubernetes.io/instance: master
        app.kubernetes.io/name: myapp
    spec:
      volumes: # emptyDir volume for the entire pod
        - name: config-volume
          emptyDir: {}
      initContainers: # an init container that will compile the env var with the pod name
      - name: config-compiler
        image: bash
        volumeMounts:
        - name: config-volume
          mountPath: /configs
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        envFrom:
        - configMapRef:
            name: common-configmap
        command: 
        - bash
        - -c
        - 'echo "jdbc_db_url=$(echo "$jdbc_db_url" | sed "s/\${POD_NAME}/${POD_NAME}/")" > /configs/compiled.env'
      containers:
      - volumeMounts:
        - name: config-volume
          mountPath: /configs
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        envFrom:
        - configMapRef:
            name: myapp-configmap
        - configMapRef:
            name: common-configmap
        image: docker.com/myapp:4.0.0
        imagePullPolicy: Always
        name: myapp
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always

Then you'll need to make sure your actual pod reads the compiled env file. You can do that either by adding a line like this to your entrypoint script:

source /configs/compiled.env

or by editing your pod's command to something like:

command: [ 'bash', '-c', 'source /configs/compiled.env && exec previous-command' ]

(The array form of command does not go through a shell, so 'source' has to run inside 'bash -c'; 'previous-command' stands for whatever the image ran before.)

Both of the above are a bit hackish, so what I recommend is to check which config files your app reads by default and match your compile script to those.

For example, if your app reads from /etc/myapp/confs.d/files.env, mount your emptyDir at /etc/myapp/confs.d and have the init container write files.env in the format your app expects (e.g. if it's an ini file rather than an env file, compile it to match that format).

And obviously there are better ways to compile a config file than with sed, but it's an option if you want (and can afford) to keep things short.
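The whole compile-then-source flow can be sketched locally like this (the /tmp path and pod name are illustrative stand-ins for the emptyDir mount and the real pod name):

```shell
# Simulate the init container: write a sourceable env file with the pod name
# substituted into the JDBC URL.
POD_NAME="myapp-d5db776b9-h25q5"
jdbc_db_url='jdbc:postgresql://dev.postgres.database.azure.com/dbname?ApplicationName=${POD_NAME}'
mkdir -p /tmp/configs
echo "jdbc_db_url=$(echo "$jdbc_db_url" | sed "s/\${POD_NAME}/${POD_NAME}/")" > /tmp/configs/compiled.env

# Simulate the main container: load the compiled file before starting the app.
. /tmp/configs/compiled.env
echo "$jdbc_db_url"
```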

Upvotes: 3
