user19238163

Why do I see a pod both Running and in CrashLoopBackOff in k8s?

I am trying to create PostgreSQL pods on k8s. After the pods deploy, when I run kubectl get pods I see this output:

NAME            READY   STATUS    RESTARTS   AGE
pgadmin         1/1     Running   5          9m
postgres        1/1     Running   5          8m

However, when I run kubectl get pods -o wide, I see this output:

pgadmin         0/1     CrashLoopBackOff   4   7m
postgres        0/1     CrashLoopBackOff   4   7m

I am not sure why I see two different outputs. When I run kubectl logs pgadmin-63545-634536 I see the following output:

pgAdmin 4 - Application Initialisation
======================================

[2022-11-15 14:43:28 +0000] [1] [INFO] Starting gunicorn 20.1.0
[2022-11-15 14:43:28 +0000] [1] [INFO] Listening at: http://[::]:80 (1)
[2022-11-15 14:43:28 +0000] [1] [INFO] Using worker: gthread
[2022-11-15 14:43:28 +0000] [90] [INFO] Booting worker with pid: 90
[2022-11-15 14:44:24 +0000] [1] [INFO] Handling signal: term
[2022-11-15 14:44:25 +0000] [90] [INFO] Worker exiting (pid: 90)
[2022-11-15 14:44:26 +0000] [1] [INFO] Shutting down: Master


Can you please explain this behaviour and why my pods shut down? I am very new to k8s. Thanks in advance.

I tried to inspect the log file.

Upvotes: 1

Views: 190

Answers (1)

Blender Fox

Reputation: 5625

To answer why you are seeing two different outputs, you have to understand how a container runs in Kubernetes.

In Docker, a container can run, terminate, and then stay stopped unless you tell Docker that you want the container to restart automatically by giving it a restart policy, e.g. the --restart always switch.
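For example, a rough sketch of that flag (the image and port mapping here are illustrative, not taken from your setup):

docker run -d --restart always -p 8080:80 dpage/pgadmin4

Without a restart policy, the container would simply stay in the Exited state once it terminates.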

In a Kubernetes Deployment (https://kubernetes.io/docs/concepts/workloads/controllers/deployment/), the equivalent of --restart always is implied -- the pod spec's restartPolicy defaults to Always -- so when a container in a pod exits, regardless of whether the exit was intentional or not, Kubernetes will restart it and will keep trying to restart it.
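A minimal Deployment sketch to show where that sits (the names and image are assumptions, not your actual manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      restartPolicy: Always   # the default, and the only value a Deployment allows
      containers:
      - name: pgadmin
        image: dpage/pgadmin4
        ports:
        - containerPort: 80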

The exception to this is when you are running a Job (https://kubernetes.io/docs/concepts/workloads/controllers/job/) or CronJob (https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/), which will run (and restart on failure) until it completes successfully and then stop, but that is beyond the scope of your question.

You can see from the RESTARTS count in your output that the number of restarts is increasing. Kubernetes will keep restarting the exited containers as described above, but if it detects that a container is crashing repeatedly, it starts to add a "back off" period (i.e. an increasing delay before the next restart attempt). During that delay the pod's status shows as CrashLoopBackOff, and between attempts it can briefly show Running again -- which is why your two commands, run at different moments, caught the pods in different states.
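You can watch this cycle as it happens (this is just the standard watch flag, nothing specific to your cluster):

kubectl get pods -w

The same pod will flip between Running and CrashLoopBackOff as restart attempts and back-off delays alternate.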

To answer why this is happening, you should describe the pod using kubectl describe. For example:

kubectl describe pod --namespace {name-of-namespace} pgadmin

This will give you the details of the pod; look under the Events section -- it may have some details of what happened. Most likely the Liveness Probe (https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) is failing, so Kubernetes thinks the pods are dead and restarts them accordingly.
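It can also help to read the logs of the previous (crashed) container instance rather than the current one -- the pod name here is copied from your question:

kubectl logs --previous pgadmin-63545-634536

If the previous instance ends with a TERM signal, as your pasted log does, that is consistent with Kubernetes killing the container after a failed probe rather than the application crashing on its own.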

Upvotes: 0
