aknuds1

Reputation: 67997

Does kubectl replace not update environment variables?

I am adding an environment variable to a Kubernetes replication controller spec, but when I update the running RC from the spec, the environment variable isn't added to it. How come?

I update the RC with kubectl replace -f docker/podspecs/web-controller.yaml, according to the following spec, in which the environment variable IRON_PASSWORD has been added since the previous revision, but the running RC isn't updated correspondingly:

apiVersion: v1
kind: ReplicationController
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: 1
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: quay.io/aknuds1/muzhack
        # Always pull latest version of image
        imagePullPolicy: Always
        env:
        - name: APP_URI
          value: https://staging.muzhack.com
        - name: IRON_PASSWORD
          value: password
        ports:
        - name: http-server
          containerPort: 80
      imagePullSecrets:
      - name: quay.io

After updating the RC according to the spec, the pod it manages looks like this (kubectl get pod web-scpc3 -o yaml); notice that IRON_PASSWORD is missing:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/created-by: |
      {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"web","uid":"c1c4185f-0867-11e6-b557-42010af000f7","apiVersion":"v1","resourceVersion":"17714"}}
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
      web'
  creationTimestamp: 2016-04-22T08:54:00Z
  generateName: web-
  labels:
    app: web
  name: web-scpc3
  namespace: default
  resourceVersion: "17844"
  selfLink: /api/v1/namespaces/default/pods/web-scpc3
  uid: c1c5035f-0867-11e6-b557-42010af000f7
spec:
  containers:
  - env:
    - name: APP_URI
      value: https://staging.muzhack.com
    image: quay.io/aknuds1/muzhack
    imagePullPolicy: Always
    name: web
    ports:
    - containerPort: 80
      name: http-server
      protocol: TCP
    resources:
      requests:
        cpu: 100m
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-vfutp
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: quay.io
  nodeName: gke-staging-default-pool-f98acf11-ba7d
  restartPolicy: Always
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: default-token-vfutp
    secret:
      secretName: default-token-vfutp
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2016-04-22T09:00:49Z
    message: 'containers with unready status: [web]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  containerStatuses:
  - containerID: docker://dae22acb9f236433389ac0c51b730423ef9159d0c0e12770a322c70201fb7e2a
    image: quay.io/aknuds1/muzhack
    imageID: docker://8fef42c3eba5abe59c853e9ba811b3e9f10617a257396f48e564e3206e0e1103
    lastState:
      terminated:
        containerID: docker://dae22acb9f236433389ac0c51b730423ef9159d0c0e12770a322c70201fb7e2a
        exitCode: 1
        finishedAt: 2016-04-22T09:00:48Z
        reason: Error
        startedAt: 2016-04-22T09:00:46Z
    name: web
    ready: false
    restartCount: 6
    state:
      waiting:
        message: Back-off 5m0s restarting failed container=web pod=web-scpc3_default(c1c5035f-0867-11e6-b557-42010af000f7)
        reason: CrashLoopBackOff
  hostIP: 10.132.0.3
  phase: Running
  podIP: 10.32.0.3
  startTime: 2016-04-22T08:54:00Z

Upvotes: 1

Views: 5025

Answers (1)

Alex Robinson

Reputation: 13377

Replacing the ReplicationController object does not actually recreate the underlying pods, so the pods keep the spec from the previous configuration of the RC until they need to be recreated. If you delete the running pod, the new one that gets created to replace it will have the new environment variable.
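
For example, a minimal sketch of the delete-and-recreate approach, using the pod name from the question (the RC's replica count causes a replacement pod to be created from the current template):

# Delete the pod managed by the RC; the RC will create a replacement
# from the updated template, including IRON_PASSWORD.
kubectl delete pod web-scpc3
# Check the replacement pod (its name will differ from web-scpc3).
kubectl get pods -l app=web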

This is what the kubectl rolling-update command is for, and part of the reason the Deployment type was added in Kubernetes 1.2.
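
As an illustration, here is a minimal sketch of the Deployment route, assuming the extensions/v1beta1 API available in Kubernetes 1.2. Unlike an RC, a Deployment rolls out pod template changes automatically when the object is updated:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: quay.io/aknuds1/muzhack
        imagePullPolicy: Always
        env:
        - name: APP_URI
          value: https://staging.muzhack.com
        - name: IRON_PASSWORD
          value: password
        ports:
        - name: http-server
          containerPort: 80
      imagePullSecrets:
      - name: quay.io

Editing the env section of this file and running kubectl apply -f on it would then replace the running pods via a rolling update, instead of leaving them on the old spec.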

Upvotes: 3
