Nag

Reputation: 2057

Kubernetes - scope of matchLabels selectors

I have the following Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: hello-world
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-pod
        image: nginx:latest
        ports:
        - containerPort: 80

Does this selector apply only to the Pods managed by this Deployment, or to any Pods that carry the same label? I am trying to understand the scope of selectors.

Upvotes: 2

Views: 3270

Answers (2)

DT.

Reputation: 3569

First, note: the pod-template-hash label is added by the Deployment controller to every ReplicaSet that a Deployment creates or adopts.

This label ensures that child ReplicaSets of a Deployment do not overlap. It is generated by hashing the PodTemplate of the ReplicaSet and using the resulting hash as the label value that is added to the ReplicaSet selector, Pod template labels, and in any existing Pods that the ReplicaSet might have.
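If you want to see this hash on your own Deployment's ReplicaSet, something like the following should work (this requires a running cluster; `app=hello-world` is the label from the question's manifest):

```shell
# Show the ReplicaSet(s) the Deployment created; the pod-template-hash
# value appears in both the SELECTOR and LABELS columns.
kubectl get rs -l app=hello-world -o wide --show-labels
```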

Now you can test how a selector works on a Deployment, as follows.

First, create an nginx Pod with a label:

$ kubectl run nginx-1 --image=nginx --restart=Never --labels=run=nginx
$ kubectl get pods -o wide --show-labels
NAME                         READY   STATUS              RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES   LABELS
pod/nginx-1                  1/1     Running             0          11m   192.168.58.198   k8s-node02   <none>           <none>            run=nginx

Now create a Deployment, say with 4 replicas, passing the same label as the previous Pod at creation time.

Note in the output below that pod-template-hash is added as an extra label on the Deployment's Pods at creation:

$ kubectl run nginx --image=nginx --labels=run=nginx --replicas=4
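A side note for anyone reproducing this on a recent cluster: the `--replicas` flag was removed from `kubectl run` around kubectl 1.18, where `kubectl run` now only creates a bare Pod. On current versions the rough equivalent is (hedged, since flag availability varies by kubectl version):

```shell
# Create a Deployment explicitly; --replicas is supported here on
# recent kubectl versions.
kubectl create deployment nginx --image=nginx --replicas=4
# Note: this applies the label app=nginx rather than run=nginx, so to
# reproduce the experiment below you would need to relabel accordingly.
```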

$ kubectl get all -o wide --show-labels
NAME                         READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES   LABELS
pod/nginx-1                  1/1     Running   0          9m37s   192.168.58.198   k8s-node02   <none>           <none>            run=nginx
pod/nginx-6db489d4b7-45tvm   1/1     Running   0          5m20s   192.168.85.197   k8s-node01   <none>           <none>            pod-template-hash=6db489d4b7,run=nginx
pod/nginx-6db489d4b7-6np5m   1/1     Running   0          5m20s   192.168.85.198   k8s-node01   <none>           <none>            pod-template-hash=6db489d4b7,run=nginx
pod/nginx-6db489d4b7-g5spg   1/1     Running   0          5m20s   192.168.58.200   k8s-node02   <none>           <none>            pod-template-hash=6db489d4b7,run=nginx
pod/nginx-6db489d4b7-wgm7h   1/1     Running   0          2m52s   192.168.58.202   k8s-node02   <none>           <none>            pod-template-hash=6db489d4b7,run=nginx

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE    SELECTOR   LABELS
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   7d3h   <none>     component=apiserver,provider=kubernetes

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES   SELECTOR    LABELS
deployment.apps/nginx   4/4     4            4           5m20s   nginx        nginx    run=nginx   run=nginx

NAME                               DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES   SELECTOR                                 LABELS
replicaset.apps/nginx-6db489d4b7   4         4         4       5m20s   nginx        nginx    pod-template-hash=6db489d4b7,run=nginx   pod-template-hash=6db489d4b7,run=nginx

At this point we have 4 Pods from the Deployment, plus the standalone nginx-1 Pod, running.

Now, if we edit the standalone Pod to include the same pod-template-hash label, the ReplicaSet will count it as one of its own and immediately scale down to match its desired replica count of 4.

Edit the nginx-1 Pod and add the pod-template-hash label:

$ kubectl edit pod nginx-1
pod/nginx-1 edited
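Equivalently, instead of `kubectl edit` you can attach the label in one step. The hash value below is the one from this ReplicaSet's name; yours will differ:

```shell
# Add the ReplicaSet's pod-template-hash label to the standalone Pod,
# making it match the ReplicaSet's selector.
kubectl label pod nginx-1 pod-template-hash=6db489d4b7
```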

$ kubectl get all -o wide --show-labels
NAME                         READY   STATUS        RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES   LABELS
pod/nginx-1                  1/1     Running       0          21m     192.168.58.198   k8s-node02   <none>           <none>            pod-template-hash=6db489d4b7,run=nginx
pod/nginx-6db489d4b7-kx6xr   1/1     Running       0          9m31s   192.168.85.200   k8s-node01   <none>           <none>            pod-template-hash=6db489d4b7,run=nginx
pod/nginx-6db489d4b7-s47n7   1/1     Running       0          9m31s   192.168.85.199   k8s-node01   <none>           <none>            pod-template-hash=6db489d4b7,run=nginx
pod/nginx-6db489d4b7-vv2t4   0/1     Terminating   0          9m31s   192.168.58.204   k8s-node02   <none>           <none>            pod-template-hash=6db489d4b7,run=nginx
pod/nginx-6db489d4b7-xmqns   1/1     Running       0          9m31s   192.168.58.203   k8s-node02   <none>           <none>            pod-template-hash=6db489d4b7,run=nginx

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE    SELECTOR   LABELS
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   7d3h   <none>     component=apiserver,provider=kubernetes

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES   SELECTOR    LABELS
deployment.apps/nginx   4/4     4            4           9m31s   nginx        nginx    run=nginx   run=nginx

NAME                               DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES   SELECTOR                                 LABELS
replicaset.apps/nginx-6db489d4b7   4         4         4       9m31s   nginx        nginx    pod-template-hash=6db489d4b7,run=nginx   pod-template-hash=6db489d4b7,run=nginx

As you can see, the Deployment has removed one of its own Pods to keep the ReplicaSet count correctly at 4:

$ kubectl get all -o wide --show-labels
NAME                         READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES   LABELS
pod/nginx-1                  1/1     Running   0          27m   192.168.58.198   k8s-node02   <none>           <none>            pod-template-hash=6db489d4b7,run=nginx
pod/nginx-6db489d4b7-kx6xr   1/1     Running   0          16m   192.168.85.200   k8s-node01   <none>           <none>            pod-template-hash=6db489d4b7,run=nginx
pod/nginx-6db489d4b7-s47n7   1/1     Running   0          16m   192.168.85.199   k8s-node01   <none>           <none>            pod-template-hash=6db489d4b7,run=nginx
pod/nginx-6db489d4b7-xmqns   1/1     Running   0          16m   192.168.58.203   k8s-node02   <none>           <none>            pod-template-hash=6db489d4b7,run=nginx

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE    SELECTOR   LABELS
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   7d3h   <none>     component=apiserver,provider=kubernetes

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR    LABELS
deployment.apps/nginx   4/4     4            4           16m   nginx        nginx    run=nginx   run=nginx

NAME                               DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES   SELECTOR                                 LABELS
replicaset.apps/nginx-6db489d4b7   4         4         4       16m   nginx        nginx    pod-template-hash=6db489d4b7,run=nginx   pod-template-hash=6db489d4b7,run=nginx

I hope this example helps you understand how labels and selectors work, and their scope.

Upvotes: 2

Methkal Khalawi

Reputation: 2477

Your Deployment's selector matches any Pods with the label "app: hello-world", not just the Pods the Deployment itself manages.

In general, and I quote from the official documentation:

Note: You must specify an appropriate selector and Pod template labels in a Deployment (in this case, app: hello-world). Do not overlap labels or selectors with other controllers (including other Deployments and StatefulSets). Kubernetes doesn’t stop you from overlapping, and if multiple controllers have overlapping selectors those controllers might conflict and behave unexpectedly.
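To make that warning concrete, here is a sketch of two Deployments with overlapping selectors (the names deploy-a and deploy-b are illustrative). Both selectors match Pods labeled app: hello-world, so the two controllers may each count the other's Pods and behave unexpectedly:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-a
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: web
        image: nginx:latest
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-b
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world   # same selector as deploy-a: overlapping controllers
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: web
        image: nginx:latest
```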

Upvotes: 3
