Bob

Reputation: 8714

Kubernetes pod stays in Init phase

I am having problems with a pod that stays in the Init phase all the time.

I do not see any errors when I run kubectl describe on the pod. This is the list of events:

Events:
  Type     Reason     Age              From                                                   Message
  ----     ------     ----             ----                                                   -------
  Normal   Scheduled  5m               default-scheduler                                      Successfully assigned infrastructure/jenkins-74cc957b47-mxvqd to ip-XX-XX-XXX-XXX.eu-west-1.compute.internal
  Warning  BackOff    3m (x3 over 4m)  kubelet, ip-XX-XX-XXX-XXX.eu-west-1.compute.internal  Back-off restarting failed container
  Normal   Pulling    3m (x4 over 5m)  kubelet, ip-XX-XX-XXX-XXX.eu-west-1.compute.internal  pulling image "jenkins/jenkins:lts"
  Normal   Pulled     3m (x4 over 5m)  kubelet, ip-XX-XX-XXX-XXX.eu-west-1.compute.internal  Successfully pulled image "jenkins/jenkins:lts"
  Normal   Created    3m (x4 over 5m)  kubelet, ip-XX-XX-XXX-XXX.eu-west-1.compute.internal  Created container
  Normal   Started    3m (x4 over 5m)  kubelet, ip-XX-XX-XXX-XXX.eu-west-1.compute.internal  Started container

I can also see this:

  State:          Running
      Started:      Wed, 23 Sep 2020 09:49:56 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 23 Sep 2020 09:49:06 +0200
      Finished:     Wed, 23 Sep 2020 09:49:27 +0200
    Ready:          False
    Restart Count:  3

When I try to get the logs of the pod, it looks like this:

Error from server (BadRequest): container "jenkins" in pod "jenkins-74cc957b47-mxvqd" is waiting to start: PodInitializing

But I am not able to see the specific error. Can someone help?

Upvotes: 2

Views: 5395

Answers (3)

Wytrzymały Wiktor

Reputation: 13898

The official documentation has several recommendations for debugging running pods:

  • Examining pod logs: by executing kubectl logs ${POD_NAME} ${CONTAINER_NAME} or kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME} if your container has previously crashed

  • Debugging with container exec: run commands inside a specific container with kubectl exec: kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}

  • Debugging with an ephemeral debug container: Ephemeral containers are useful for interactive troubleshooting when kubectl exec is insufficient because a container has crashed or a container image doesn't include debugging utilities. You can find an example in the linked documentation.

  • Debugging via a shell on the node: If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host.

You can find more details in the linked documentation.
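Since the pod is stuck in PodInitializing, the failing container is most likely an init container rather than the main one. A minimal sketch of how to find and inspect it, using the pod and namespace names from the question (yours may differ):

```shell
# List the names of the pod's init containers
kubectl get pod jenkins-74cc957b47-mxvqd -n infrastructure \
  -o jsonpath='{.spec.initContainers[*].name}'

# Fetch the logs of that init container; add --previous if it already
# crashed and restarted, so you see the output of the failed run
kubectl logs jenkins-74cc957b47-mxvqd -c <init-container-name> \
  -n infrastructure --previous
```

The `-c` flag is what makes the difference here: plain `kubectl logs` targets the main container, which never started, hence the BadRequest error.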

Upvotes: 4

ckaserer

Reputation: 5722

If you can't access the logs, you can run the container as an interactive pod and follow the logs directly via

kubectl run --rm -it jenkins --image=jenkins/jenkins:lts -n YOURNAMESPACE

--rm ... delete the pod after it terminates

-it ... enable interactive and tty support

-n ... specify the target namespace

--image ... specify the image to use
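Applied to the question's setup, the call might look like this (namespace name taken from the question; the pod name `jenkins-debug` is just an example to avoid colliding with the existing deployment):

```shell
# Start a throwaway interactive Jenkins pod and watch its startup output;
# the pod is deleted automatically when you detach or it terminates
kubectl run --rm -it jenkins-debug \
  --image=jenkins/jenkins:lts \
  -n infrastructure
```

If the image fails the same way here, the error will be printed straight to your terminal instead of being hidden behind the restart back-off.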

Upvotes: 0

Bob

Reputation: 8714

This is how I retrieved the logs from the broken instance:

kubectl logs jenkins-df87c46d5-52dtt -c copy-default-config -n infrastructure > debug1.log

Upvotes: 0
