Azad Md Abul Kalam

Reputation: 135

Kubernetes DaemonSet creating two pods instead of the expected one

I have the following local two-node Kubernetes cluster:

$ kubectl get nodes -o wide
NAME   STATUS   ROLES                  AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
srl1   Ready    control-plane,master   2d18h   v1.21.2   xxx.xxx.12.58   <none>        Ubuntu 20.04.2 LTS   5.4.0-80-generic   docker://20.10.7
srl2   Ready    <none>                 2d18h   v1.21.3   xxx.xxx.80.72   <none>        Ubuntu 18.04.2 LTS   5.4.0-80-generic   docker://20.10.2

I am trying to deploy an application using a cluster-creation Python script (https://github.com/hydro-project/cluster/blob/master/hydro/cluster/create_cluster.py).

When the script creates a routing node using apps_client.create_namespaced_daemon_set(namespace=util.NAMESPACE, body=yml), I expect it to create a single pod from the routing-ds.yaml file (given below) and assign it to the routing DaemonSet. However, as you can see, it is creating two routing pods, one on each physical node. (FYI: my master node can schedule pods.)

akazad@srl1:~/hydro-project/cluster$ kubectl get all -A -o wide
NAMESPACE        NAME                                           READY   STATUS    RESTARTS   AGE     IP               NODE   NOMINATED NODE   READINESS GATES
default          pod/management-pod                             1/1     Running   0          25m     192.168.190.77   srl2   <none>           <none>
default          pod/monitoring-pod                             1/1     Running   0          25m     192.168.120.71   srl1   <none>           <none>
default          pod/routing-nodes-9q7dr                        1/1     Running   0          24m    xxx.xxx.12.58    srl1   <none>           <none>
default          pod/routing-nodes-kfbnv                        1/1     Running   0          24m    xxx.xxx.80.72    srl2   <none>           <none>
kube-system      pod/calico-kube-controllers-7676785684-tpz7q   1/1     Running   0          2d19h   192.168.120.65   srl1   <none>           <none>
kube-system      pod/calico-node-lnxtb                          1/1     Running   0          2d19h   xxx.xxx.12.58    srl1   <none>           <none>
kube-system      pod/calico-node-mdvpd                          1/1     Running   0          2d19h   xxx.xxx.80.72    srl2   <none>           <none>
kube-system      pod/coredns-558bd4d5db-vfghf                   1/1     Running   0          2d19h   192.168.120.66   srl1   <none>           <none>
kube-system      pod/coredns-558bd4d5db-x7jhj                   1/1     Running   0          2d19h   xxx.xxx.120.67   srl1   <none>           <none>
kube-system      pod/etcd-srl1                                  1/1     Running   0          2d19h   xxx.xxx.12.58    srl1   <none>           <none>
kube-system      pod/kube-apiserver-srl1                        1/1     Running   0          2d19h   xxx.xxx.12.58    srl1   <none>           <none>
kube-system      pod/kube-controller-manager-srl1               1/1     Running   0          2d19h  xxx.xxx.12.58    srl1   <none>           <none>
kube-system      pod/kube-proxy-l8fds                           1/1     Running   0          2d19h   xxx.xxx.12.58    srl1   <none>           <none>
kube-system      pod/kube-proxy-szrng                           1/1     Running   0          2d19h   xxx.xxx.80.72    srl2   <none>           <none>
kube-system      pod/kube-scheduler-srl1                        1/1     Running   0          2d19h   xxx.xxx.12.58    srl1   <none>           <none>
metallb-system   pod/controller-6b78bff7d9-t7gjr                1/1     Running   0          2d19h   192.168.190.65   srl2   <none>           <none>
metallb-system   pod/speaker-qsqnc                              1/1     Running   0          2d19h   xxx.xxx.12.58    srl1   <none>           <none>
metallb-system   pod/speaker-s4pp8                              1/1     Running   0          2d19h   xxx.xxx.80.72    srl2   <none>           <none>

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  2d19h   <none>
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   2d19h   k8s-app=kube-dns

NAMESPACE        NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE     CONTAINERS          IMAGES                            SELECTOR
default          daemonset.apps/routing-nodes   2         2         2       2            2           <none>                   24m     routing-container   akazad1/srlanna:v2                role=routing
kube-system      daemonset.apps/calico-node     2         2         2       2            2           kubernetes.io/os=linux   2d19h   calico-node         calico/node:v3.14.2               k8s-app=calico-node
kube-system      daemonset.apps/kube-proxy      2         2         2       2            2           kubernetes.io/os=linux   2d19h   kube-proxy          k8s.gcr.io/kube-proxy:v1.21.3     k8s-app=kube-proxy
metallb-system   daemonset.apps/speaker         2         2         2       2            2           kubernetes.io/os=linux   2d19h   speaker             quay.io/metallb/speaker:v0.10.2   app=metallb,component=speaker


However, when the script directly creates a pod from management-pod.yaml (given below), it creates one pod as expected.

Why is the DaemonSet creating two pods instead of one?

Code segment where the routing DaemonSet is created:

for i in range(len(kinds)):
    kind = kinds[i]

    # Create should only be true when the DaemonSet is being created for the
    # first time -- i.e., when this is called from create_cluster. After that,
    # we can basically ignore this because the DaemonSet will take care of
    # adding pods to created nodes.
    if create:
        fname = 'yaml/ds/%s-ds.yml' % kind
        yml = util.load_yaml(fname, prefix)

        for container in yml['spec']['template']['spec']['containers']:
            env = container['env']

            util.replace_yaml_val(env, 'ROUTING_IPS', route_str)
            util.replace_yaml_val(env, 'ROUTE_ADDR', route_addr)
            util.replace_yaml_val(env, 'SCHED_IPS', sched_str)
            util.replace_yaml_val(env, 'FUNCTION_ADDR', function_addr)
            util.replace_yaml_val(env, 'MON_IPS', mon_str)
            util.replace_yaml_val(env, 'MGMT_IP', management_ip)
            util.replace_yaml_val(env, 'SEED_IP', seed_ip)

        apps_client.create_namespaced_daemon_set(namespace=util.NAMESPACE,
                                                 body=yml)

    # Wait until all pods of this kind are running
    res = []
    while len(res) != expected_counts[i]:
        res = util.get_pod_ips(client, 'role=' + kind, is_running=True)

    pods = client.list_namespaced_pod(namespace=util.NAMESPACE,
                                      label_selector='role=' + kind).items

    created_pods = get_current_pod_container_pairs(pods)

I have removed the nodeSelector from all the YAML files because I am running on a bare-metal cluster.

1. routing-ds.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: routing-nodes
  labels:
    role: routing
spec:
  selector:
    matchLabels:
      role: routing
  template:
    metadata:
      labels:
        role: routing
    spec:
      #nodeSelector:
      #  role: routing
      hostNetwork: true
      containers:
      - name: routing-container
        image: akazad1/srlanna:v2
        env:
        - name: SERVER_TYPE
          value: r
        - name: MON_IPS
          value: MON_IPS_DUMMY
        - name: REPO_ORG
          value: hydro-project
        - name: REPO_BRANCH
          value: master

2. management-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: management-pod
  labels:
    role: management
spec:
  restartPolicy: Never
  containers:
  - name: management-container
    image: hydroproject/management
    env:
    #- name: AWS_ACCESS_KEY_ID
    #  value: ACCESS_KEY_ID_DUMMY
    #- name: AWS_SECRET_ACCESS_KEY
    #  value: SECRET_KEY_DUMMY
    #- name: KOPS_STATE_STORE
    #  value: KOPS_BUCKET_DUMMY
    - name: HYDRO_CLUSTER_NAME
      value: CLUSTER_NAME
    - name: REPO_ORG
      value: hydro-project
    - name: REPO_BRANCH
      value: master
    - name: ANNA_REPO_ORG
      value: hydro-project
    - name: ANNA_REPO_BRANCH
      value: master
    # nodeSelector:
    #   role: general

Upvotes: 1

Views: 1757

Answers (1)

Harsh Manvar

Reputation: 30083

There may be a misunderstanding here: you have to use kind: Deployment if you want to manage the number of replicas (pods: 1, 2, 3, ... n) on Kubernetes.

A DaemonSet's behavior is to run one pod on each available node in the cluster.

Since your cluster has two nodes, the DaemonSet runs one pod on each of them. If you add a node, the DaemonSet will automatically create a pod on that node as well.

kind: Pod

will create a single pod only, which is its default behavior. That is why management-pod.yaml gives you exactly one pod.
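Side note: if you want the routing pod on only one node while keeping kind: DaemonSet, you can restore the nodeSelector in the pod template and label exactly one node. Below is a minimal sketch, assuming you apply the label yourself first (the kubectl label command and the reuse of the role=routing label on a node are illustrative, not part of the Hydro scripts):

# Label exactly one node so the DaemonSet schedules only there
kubectl label node srl2 role=routing

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: routing-nodes
  labels:
    role: routing
spec:
  selector:
    matchLabels:
      role: routing
  template:
    metadata:
      labels:
        role: routing
    spec:
      nodeSelector:
        role: routing      # only nodes carrying this label run the pod
      hostNetwork: true
      containers:
      - name: routing-container
        image: akazad1/srlanna:v2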

The following are some of the Kubernetes objects:

Pods
ReplicationController (manages Pods)
Deployment (manages Pods)
StatefulSets
DaemonSets

You can read more at: https://chkrishna.medium.com/kubernetes-objects-e0a8b93b5cdc

Official documentation: https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/

If you want to manage pods using a controller, kind: Deployment is the best choice: you can scale the replicas up and down, and you can set replicas in the YAML (1, 2, 3, ...) so that exactly that many pods run in the cluster, as shown in the sketch below.
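For example, a minimal Deployment version of the question's routing manifest with a single replica could look like this (an illustration only, not the Hydro project's actual manifest; the env section is omitted for brevity):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: routing-nodes
  labels:
    role: routing
spec:
  replicas: 1              # the controller keeps exactly this many pods running
  selector:
    matchLabels:
      role: routing
  template:
    metadata:
      labels:
        role: routing
    spec:
      hostNetwork: true
      containers:
      - name: routing-container
        image: akazad1/srlanna:v2

With replicas: 1, the scheduler places the single pod on whichever node has capacity, and you can scale later with kubectl scale deployment routing-nodes --replicas=3.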

Upvotes: 2
