Dan-Dev

Reputation: 9430

Kubernetes kubectl shows pod restarts as zero but pod age has changed

Can somebody explain why the following command shows that there have been no restarts, but the age is 2 hours, when the pod was started 17 days ago?

kubectl get pod -o wide
NAME                  READY     STATUS    RESTARTS   AGE       IP                NODE
api-depl-nm-xxx       1/1       Running   0          17d       xxx.xxx.xxx.xxx   ip-xxx-xxx-xxx-xxx.eu-west-1.compute.internal
ei-depl-nm-xxx        1/1       Running   0          2h        xxx.xxx.xxx.xxx   ip-xxx-xxx-xxx-xxx.eu-west-1.compute.internal
jenkins-depl-nm-xxx   1/1       Running   0          2h        xxx.xxx.xxx.xxx   ip-xxx-xxx-xxx-xxx.eu-west-1.compute.internal

The deployments have been running for 17 days:

kubectl get deploy -o wide
NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE       CONTAINER(S)      IMAGE(S)   SELECTOR
api-depl-nm       1         1         1            1           17d       api-depl-nm       xxx        name=api-depl-nm
ei-depl-nm        1         1         1            1           17d       ei-depl-nm        xxx        name=ei-depl-nm
jenkins-depl-nm   1         1         1            1           17d       jenkins-depl-nm   xxx        name=jenkins-depl-nm

The start time was 2 hours ago:

kubectl describe po ei-depl-nm-xxx | grep Start
Start Time:     Tue, 24 Jul 2018 09:07:05 +0100
Started:        Tue, 24 Jul 2018 09:10:33 +0100

The application logs show that it restarted, so why is the restart count 0?
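
A quick way to tell a recreated pod apart from a restarted container (a sketch; ei-depl-nm-xxx is the truncated name from above, so substitute the real pod name):

# When the pod object itself was created; a recent value means a brand-new pod rather than a restart
kubectl get pod ei-depl-nm-xxx -o jsonpath='{.metadata.creationTimestamp}{"\n"}'

# The restart count only increments when a container restarts inside the same pod
kubectl get pod ei-depl-nm-xxx -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'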

Updated with more information in response to the answer.

I may be wrong, but I don't think the deployment was updated or scaled. It certainly was not done by me, and no one else has access to the system.

kubectl describe deployment ei-depl-nm

...
CreationTimestamp:      Fri, 06 Jul 2018 17:06:24 +0100
Labels:                 name=ei-depl-nm
...
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
...
Conditions:
  Type          Status  Reason
  ----          ------  ------
  Available     True    MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet:  ei-depl-nm-xxx (1/1 replicas created)
Events:         <none>
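
To double-check that no rollout happened, the ReplicaSet ages and the rollout history can be inspected as well (a sketch; the label selector name=ei-depl-nm is taken from the Labels line above):

# A ReplicaSet with a recent AGE here would point at an update
kubectl get rs -l name=ei-depl-nm -o wide

# More than one revision here would also point at an update or rollback
kubectl rollout history deployment ei-depl-nm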

I may be wrong, but I don't think the worker node was restarted or shut down either:

kubectl describe nodes ip-xxx.eu-west-1.compute.internal

Taints:                 <none>
CreationTimestamp:      Fri, 06 Jul 2018 16:39:40 +0100
Conditions:
  Type                  Status  LastHeartbeatTime                       LastTransitionTime                      Reason                          Message
  ----                  ------  -----------------                       ------------------                      ------                          -------
  NetworkUnavailable    False   Fri, 06 Jul 2018 16:39:45 +0100         Fri, 06 Jul 2018 16:39:45 +0100         RouteCreated                    RouteController created a route
  OutOfDisk             False   Wed, 25 Jul 2018 16:30:36 +0100         Fri, 06 Jul 2018 16:39:40 +0100         KubeletHasSufficientDisk        kubelet has sufficient disk space available
  MemoryPressure        False   Wed, 25 Jul 2018 16:30:36 +0100         Wed, 25 Jul 2018 02:23:01 +0100         KubeletHasSufficientMemory      kubelet has sufficient memory available
  DiskPressure          False   Wed, 25 Jul 2018 16:30:36 +0100         Wed, 25 Jul 2018 02:23:01 +0100         KubeletHasNoDiskPressure        kubelet has no disk pressure
  Ready                 True    Wed, 25 Jul 2018 16:30:36 +0100         Wed, 25 Jul 2018 02:23:11 +0100         KubeletReady                    kubelet is posting ready status
...
Non-terminated Pods:            (4 in total)
  Namespace                     Name                                             CPU Requests    CPU Limits      Memory Requests Memory Limits
  ---------                     ----                                             ------------    ----------      --------------- -------------
  default                       ei-depl-nm-xxx                                     100m (5%)       0 (0%)          0 (0%)          0 (0%)
  default                       jenkins-depl-nm-xxx                                100m (5%)       0 (0%)          0 (0%)          0 (0%)
  kube-system                   kube-dns-xxx                                      260m (13%)      0 (0%)          110Mi (1%)      170Mi (2%)
  kube-system                   kube-proxy-ip-xxx.eu-west-1.compute.internal            100m (5%)       0 (0%)          0 (0%)          0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits      Memory Requests Memory Limits
  ------------  ----------      --------------- -------------
  560m (28%)    0 (0%)          110Mi (1%)      170Mi (2%)
Events:         <none>
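
Since events are only retained for a limited time (one hour by default on the kube-apiserver), the Events: <none> above does not rule out an earlier reboot. With SSH access to the node, the boot and kubelet start times can be checked directly (a sketch, assuming a systemd-based node):

# How long the node itself has been up
uptime

# When the kubelet service last (re)started; a recent time here combined with
# an old boot time would point at a kubelet restart rather than a full reboot
systemctl show kubelet --property=ActiveEnterTimestamp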

Upvotes: 10

Views: 8170

Answers (1)

VAS

Reputation: 9031

There are two things that might have happened (a quick way to tell the two cases apart is sketched after the list):

  1. The deployment was updated or scaled:

    • the age of the deployment does not change
    • a new ReplicaSet is created and the old ReplicaSet is scaled down to zero. You can check it by running

      $ kubectl describe deployment <deployment_name>  
      ...
      Events:
        Type    Reason             Age   From                   Message
        ----    ------             ----  ----                   -------
        Normal  ScalingReplicaSet  1m    deployment-controller  Scaled up replica set testdep1-75488876f6 to 1
        Normal  ScalingReplicaSet  1m    deployment-controller  Scaled down replica set testdep1-d4884df5f to 0
      
    • pods created by the old ReplicaSet are terminated; the new ReplicaSet creates a brand-new pod with restarts 0 and age 0 sec.

  2. The worker node was restarted or shut down:

    • the pod on the old worker node disappears
    • the scheduler creates a brand-new pod on the first available node (it can be the same node after a reboot) with restarts 0 and age 0 sec.
    • you can check the node start events by running

      kubectl describe nodes <node_name>
      ...
        Type    Reason                   Age                From                   Message
        ----    ------                   ----               ----                   -------
        Normal  Starting                 32s                kubelet, <node-name>     Starting kubelet.
        Normal  NodeHasSufficientPID     31s (x5 over 32s)  kubelet, <node-name>     Node <node-name> status is now: NodeHasSufficientPID
        Normal  NodeAllocatableEnforced  31s                kubelet, <node-name>     Updated Node Allocatable limit across pods
        Normal  NodeHasSufficientDisk    30s (x6 over 32s)  kubelet, <node-name>     Node <node-name> status is now: NodeHasSufficientDisk
        Normal  NodeHasSufficientMemory  30s (x6 over 32s)  kubelet, <node-name>     Node <node-name> status is now: NodeHasSufficientMemory
        Normal  NodeHasNoDiskPressure    30s (x6 over 32s)  kubelet, <node-name>     Node <node-name> status is now: NodeHasNoDiskPressure
        Normal  Starting                 10s                kube-proxy, <node-name>  Starting kube-proxy.
      
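
In both cases the pod object is replaced rather than restarted in place, which is why RESTARTS stays at 0 while AGE resets. One way to see this at a glance (a sketch using kubectl's custom-columns output; the column names are arbitrary):

      # Pod creation time next to the container restart count: a fresh CREATED
      # with RESTARTS 0 means the pod was recreated, while a non-zero RESTARTS
      # means a container restarted inside the same pod
      kubectl get pods -o custom-columns=NAME:.metadata.name,CREATED:.metadata.creationTimestamp,RESTARTS:.status.containerStatuses[*].restartCount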

Upvotes: 9
