sam

Reputation: 85

Container labels in Kubernetes

I am building my docker image with jenkins using:

docker build --build-arg VCS_REF=$GIT_COMMIT \
  --build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` \
  --build-arg BUILD_NUMBER=$BUILD_NUMBER -t $IMAGE_NAME .

I was using plain Docker, but I am migrating to Kubernetes.

With Docker I could access those labels via:

docker inspect --format "{{ index .Config.Labels \"$label\"}}" $container

How can I access those labels with Kubernetes?

I am aware that I could add those labels in .metadata.labels of my YAML files, but I don't like that approach much because:

- it ties that information to the deployment and not to the container itself
- the labels can be modified at any time
...

kubectl describe pods

Thank you

Upvotes: 8

Views: 7699

Answers (2)

Rotem jackoby

Reputation: 22228

I'll add another option.

I would suggest reading about the recommended labels defined by Kubernetes:

Key                           Description                     
app.kubernetes.io/name        The name of the application     
app.kubernetes.io/instance    A unique name identifying the instance of an application  
app.kubernetes.io/version     The current version of the application (e.g., a semantic version, revision hash, etc.)
app.kubernetes.io/component   The component within the architecture 
app.kubernetes.io/part-of     The name of a higher level application this one is part of
app.kubernetes.io/managed-by  The tool being used to manage the operation of an application

So you can use the labels to describe a pod:

apiVersion: v1
kind: Pod # or put the same labels on a Deployment's pod template
metadata:
  labels:
    app.kubernetes.io/name: wordpress
    app.kubernetes.io/instance: wordpress-abcxzy
    app.kubernetes.io/version: "4.9.4"
    app.kubernetes.io/managed-by: helm
    app.kubernetes.io/component: server
    app.kubernetes.io/part-of: wordpress
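
Once those labels are applied, you can select workloads by them, and even read a single label back in a way similar to docker inspect. The pod name below is taken from the example above and is illustrative; these commands assume a running cluster:

```shell
# List all pods belonging to the wordpress application
kubectl get pods -l app.kubernetes.io/name=wordpress

# Read one label back from a pod (dots in the label key are escaped with \.)
kubectl get pod wordpress-abcxzy \
  -o jsonpath="{.metadata.labels.app\.kubernetes\.io/name}"
```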

Then use the Downward API (which works similarly to reflection in programming languages).

There are two ways to expose Pod and Container fields to a running Container:

1) Environment variables.
2) Volume files.
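
For the first option, here is a minimal sketch of exposing pod metadata as environment variables (the pod name and the version label here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-env-example
  labels:
    version: "4.5.6"
spec:
  containers:
    - name: client-container
      image: k8s.gcr.io/busybox
      command: ["sh", "-c", "echo $POD_NAME $APP_VERSION && sleep 3600"]
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: APP_VERSION
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['version']
```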

Below is an example using volume files:

apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example
  labels:
    version: 4.5.6
    component: database
    part-of: etl-engine
  annotations:
    build: two
    builder: john-doe
spec:
  containers:
    - name: client-container
      image: k8s.gcr.io/busybox
      command: ["sh", "-c"]
      args:  # < ------ We're using the mounted volumes inside the container
      - while true; do
          if [[ -e /etc/podinfo/labels ]]; then
            echo -en '\n\n'; cat /etc/podinfo/labels; fi;
          if [[ -e /etc/podinfo/annotations ]]; then
            echo -en '\n\n'; cat /etc/podinfo/annotations; fi;
          sleep 5;
        done;
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:   # < -------- We're mounting in our example the pod's labels and annotations
    - name: podinfo
      downwardAPI:
        items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations

Notice that in the example we accessed the labels and annotations that were passed and mounted to the /etc/podinfo path.
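
You can verify the mounted data from outside the pod as well (pod name from the example above; assumes a running cluster):

```shell
# Tail the container's stdout, which prints the mounted files every 5 seconds
kubectl logs kubernetes-downwardapi-volume-example

# Or read the projected files directly inside the container
kubectl exec kubernetes-downwardapi-volume-example -- cat /etc/podinfo/labels
kubectl exec kubernetes-downwardapi-volume-example -- cat /etc/podinfo/annotations
```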

Besides labels and annotations, the Downward API exposes several additional fields, such as:

  • The pod's IP address.
  • The pod's service account name.
  • The node's name and IP.
  • A container's CPU limit, CPU request, memory limit, and memory request.

See the full list here.


(*) A nice blog discussing the downward API.


(**) You can view all your pods' labels with:

$ kubectl get pods --show-labels
NAME                       ...         LABELS
my-app-xxx-aaa                         pod-template-hash=...,run=my-app
my-app-xxx-bbb                         pod-template-hash=...,run=my-app
my-app-xxx-ccc                         pod-template-hash=...,run=my-app 
fluentd-8ft5r                          app=fluentd,controller-revision-hash=...,pod-template-generation=2
fluentd-fl459                          app=fluentd,controller-revision-hash=...,pod-template-generation=2
kibana-xyz-adty4f                      app=kibana,pod-template-hash=...
recurrent-tasks-executor-xaybyzr-13456 pod-template-hash=...,run=recurrent-tasks-executor
serviceproxy-1356yh6-2mkrw             app=serviceproxy,pod-template-hash=...

Or view only a specific label with $ kubectl get pods -L <label_name>.

Upvotes: -2

David Maze

Reputation: 159998

Kubernetes doesn't expose that data. If it did, it would be part of the PodStatus API object (and its embedded ContainerStatus), which is one part of the Pod data that would get dumped out by kubectl get pod deployment-name-12345-abcde -o yaml.

You might consider encoding some of that data in the Docker image tag; for instance, if the CI system is building a tagged commit then use the source control tag name as the image tag, otherwise use a commit hash or sequence number.

Another typical path is to use a deployment manager like Helm as the principal source of truth about deployments, and if you do that there can be a path from your CD system to Helm to Kubernetes that can pass along labels or annotations.

You can also often set up software to know its own build date and source control commit ID at build time, and then expose that information via an informational-only API (like an HTTP GET /_version call or some such).
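
The last suggestion can be sketched as a minimal version endpoint. This is only an illustration, not part of the answer: the handler, route, and environment-variable names are assumptions, chosen to match the build args used in the question:

```python
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


def version_info():
    """Collect build metadata injected at build time (e.g. via ENV in the Dockerfile)."""
    return {
        "commit": os.environ.get("GIT_COMMIT", "unknown"),
        "build_date": os.environ.get("BUILD_DATE", "unknown"),
        "build_number": os.environ.get("BUILD_NUMBER", "unknown"),
    }


class VersionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/_version":
            body = json.dumps(version_info()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


# To serve: HTTPServer(("", 8080), VersionHandler).serve_forever()
```

With this in place, anyone can ask the running container itself for its build metadata, regardless of how it was deployed.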

Upvotes: 2

Related Questions