Reputation: 2537
I am new to all things Kubernetes so still have much to learn.
Have created a two node Kubernetes cluster and both nodes (master and worker) are ready to do work which is good:
[monkey@k8s-dp1 nginx-test]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
k8s-dp1   Ready     master    2h        v1.9.1
k8s-dp2   Ready     <none>    2h        v1.9.1
Also, all Kubernetes Pods look okay:
[monkey@k8s-dp1 nginx-test]# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   etcd-k8s-dp1                      1/1       Running   0          2h
kube-system   kube-apiserver-k8s-dp1            1/1       Running   0          2h
kube-system   kube-controller-manager-k8s-dp1   1/1       Running   0          2h
kube-system   kube-dns-86cc76f8d-9jh2w          3/3       Running   0          2h
kube-system   kube-proxy-65mtx                  1/1       Running   1          2h
kube-system   kube-proxy-wkkdm                  1/1       Running   0          2h
kube-system   kube-scheduler-k8s-dp1            1/1       Running   0          2h
kube-system   weave-net-6sbbn                   2/2       Running   0          2h
kube-system   weave-net-hdv9b                   2/2       Running   3          2h
However, if I try to create a new deployment in the cluster, the deployment gets created but its pods fail to reach the Running state, e.g.:
[monkey@k8s-dp1 nginx-test]# kubectl apply -f https://k8s.io/docs/tasks/run-application/deployment.yaml
deployment "nginx-deployment" created
[monkey@k8s-dp1 nginx-test]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY     STATUS              RESTARTS   AGE
default       nginx-deployment-569477d6d8-f42pz   0/1       ContainerCreating   0          5s
default       nginx-deployment-569477d6d8-spjqk   0/1       ContainerCreating   0          5s
kube-system   etcd-k8s-dp1                        1/1       Running             0          3h
kube-system   kube-apiserver-k8s-dp1              1/1       Running             0          3h
kube-system   kube-controller-manager-k8s-dp1     1/1       Running             0          3h
kube-system   kube-dns-86cc76f8d-9jh2w            3/3       Running             0          3h
kube-system   kube-proxy-65mtx                    1/1       Running             1          2h
kube-system   kube-proxy-wkkdm                    1/1       Running             0          3h
kube-system   kube-scheduler-k8s-dp1              1/1       Running             0          3h
kube-system   weave-net-6sbbn                     2/2       Running             0          2h
kube-system   weave-net-hdv9b                     2/2       Running             3          2h
I am not sure how to figure out what the problem is, but if I, for example, do a kubectl get ev, I can see the following suspect event:
<invalid> <invalid> 1 nginx-deployment-569477d6d8-f42pz.15087c66386edf5d Pod Warning FailedCreatePodSandBox kubelet, k8s-dp2 Failed create pod sandbox.
But I don't know where to go from here. I can also see that the nginx docker image itself never appears in docker images.
How do I find out more about the problem? Am I missing something fundamental in the kubernetes setup?
--- NEW INFO ---
For background info in case it helps...
Kubernetes nodes are running on CentOS 7 VMs hosted on Windows 10 hyper-v.
--- NEW INFO ---
Running kubectl describe pods shows the following Warning:
Warning NetworkNotReady 1m kubelet, k8s-dp2 network is not ready: [runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized]
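The "cni config uninitialized" message means the kubelet on k8s-dp2 cannot find a CNI network configuration yet; Weave Net normally writes one once its pod is healthy on that node. A few checks that may narrow this down (a sketch, assuming Weave Net as the CNI plugin and the default CNI config path):

```shell
# On the affected node (k8s-dp2): the CNI plugin writes its config here
# once it starts successfully; an empty directory matches the
# "cni config uninitialized" warning.
ls -l /etc/cni/net.d/

# From the master: is the Weave Net pod scheduled on k8s-dp2 healthy?
kubectl -n kube-system get pods -o wide | grep weave

# Inspect its logs for errors (pod name taken from the output above).
kubectl -n kube-system logs weave-net-hdv9b -c weave
```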
--- NEW INFO ---
Switched off the Hyper-V VMs running Kubernetes for the night after my day-job hours were over. On my return to the office this morning, I powered up the Kubernetes VMs once again to carry on and, for about 15 mins, the command:
kubectl get pods --all-namespaces
was still showing ContainerCreating for those nginx pods, the same as yesterday, but right now the command shows all pods as Running, including the nginx pods... i.e. the problem solved itself after a full reboot of both the master and worker node VMs.
I then did another full reboot, and again all pods came up as Running, which is good.
Upvotes: 41
Views: 119935
Reputation: 161
I was facing the same issue: when I listed the pods, some were in ContainerCreating status. The underlying reasons are usually visible in the describe output, e.g. an image-pull issue (or a missing image-pull secret), a ConfigMap that is not available, etc.
The reasons can be seen with these two commands:
kubectl describe pod <pod-name> -n <namespace>
systemctl status kubelet (here you will see any connection errors with the registry)
Usually this issue is caused by an interrupted image pull, so restart the services below in sequence:
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl reload docker
sudo systemctl restart kubelet (here you get all live connection logs)
Hope it will help.
Upvotes: 0
Reputation: 3188
Just sharing that this command helped a lot in tracking down my problem with the ContainerCreating status:
kubectl get events --sort-by=.metadata.creationTimestamp
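If the sorted event list is long, it can also help to narrow it to a single pod. A variant sketch (the pod name is taken from the question and is illustrative; --field-selector on kubectl get requires a newer kubectl than the v1.9.1 in the question):

```shell
# All events, oldest first:
kubectl get events --sort-by=.metadata.creationTimestamp

# Only events involving one specific pod:
kubectl get events --field-selector involvedObject.name=nginx-deployment-569477d6d8-f42pz
```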
Upvotes: 13
Reputation: 81
Had the same issue, but the problem on my side was that the cluster took too much time to pull the image; a quick cluster restart may help speed the process up.
Upvotes: 0
Reputation: 11
You can run the kubectl describe command on the deployment to see the events going on, or you can run the describe command on the pods that the deployment is spinning up.
Sometimes you may not have enough resources in your cluster. Use the kubectl top command on the running pods to check whether one of them is exhausting your resources.
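The resource check above might look like this (a sketch; kubectl top needs a metrics pipeline such as metrics-server, or Heapster on older clusters, and the node name is taken from the question):

```shell
# Per-pod CPU/memory usage across all namespaces:
kubectl top pods --all-namespaces

# Per-node usage, to spot a node running out of capacity:
kubectl top nodes

# Allocatable capacity vs. current requests/limits on a node:
kubectl describe node k8s-dp2
```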
I hope this is helpful enough
Upvotes: 1
Reputation: 5117
In my case it was due to a missing Secret or ConfigMap in the deployment's namespace.
Upvotes: 2
Reputation: 638
You can delete the pod; it will be recreated automatically:
kubectl delete pod -n namespace podname
Upvotes: 4
Reputation: 11
I was facing the same issue yesterday. When I described the pods stuck in ContainerCreating status, the problem was with the CNI: it was failing, and the pods stayed in ContainerCreating. So I deleted the CNI from the control plane and redeployed it, and within a minute all the pods changed to Running status.
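For the cluster in the question, which uses Weave Net, redeploying the CNI might look like the sketch below. The manifest URL is Weave's published installer from that era and is an assumption here; adapt the commands to whatever CNI your cluster runs.

```shell
# Remove the existing Weave Net daemonset...
kubectl -n kube-system delete daemonset weave-net

# ...then re-apply the manifest for the cluster's Kubernetes version.
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```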
Upvotes: 1
Reputation: 991
Using kubectl describe pod will show all the events. In some cases, the deployment might still be pulling the docker images from the remote registry, so the status would still show as ContainerCreating.
Upvotes: 13
Reputation: 2537
Doing a full reboot of both VMs running the Kubernetes master node and the Kubernetes worker node got all the pods to show as Running.
(NOTE: after the first reboot, it took about 15-20 mins for the pods in question to go into a Running state; on a subsequent reboot, the pods in question went into the Running state much quicker... 3-5 mins.)
Upvotes: 14