userNB13

Reputation: 113

Pod's status is always ContainerCreating. Events show 'Failed create pod sandbox'

I am trying to create a deployment on a K8s cluster with one master and two worker nodes. The cluster is running on 3 AWS EC2 instances. I have been using this environment for quite some time to play with Kubernetes. Three days ago, the status of all the pods started changing from Running to ContainerCreating. Only the pods scheduled on the master are shown as Running; the pods on the worker nodes are shown as ContainerCreating. When I run kubectl describe pod <podname>, it shows the following in the events:

 Events:
  Type     Reason                  Age   From                      Message
  ----     ------                  ----  ----                      -------
  Normal   Scheduled               34s   default-scheduler         Successfully assigned nginx-8586cf59-5h2dp to ip-172-31-20-57
  Normal   SuccessfulMountVolume   34s   kubelet, ip-172-31-20-57  MountVolume.SetUp succeeded for volume "default-token-wz7rs"
  Warning  FailedCreatePodSandBox  4s    kubelet, ip-172-31-20-57  Failed create pod sandbox.
  Normal   SandboxChanged          3s    kubelet, ip-172-31-20-57  Pod sandbox changed, it will be killed and re-created.

This error has been bugging me. I searched online for the error but couldn't find anything specific. I did a kubeadm reset on the cluster, including the master and worker nodes, and brought the cluster up again. The node status shows Ready, but I run into the same problem whenever I try to create a deployment, for example with the command below:

kubectl run nginx --image=nginx --replicas=2
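For reference, this is roughly what I check to get more detail, on the master and on the affected worker node (assuming the kubelet runs under systemd):

# On the master: node status and internal IPs
kubectl get nodes -o wide

# On the worker node: recent kubelet logs around the sandbox failure
sudo journalctl -u kubelet --no-pager | tail -n 50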

Upvotes: 11

Views: 18142

Answers (4)

Noumenon

Reputation: 6442

I had this happen when I told my launch template it could use a transit gateway's subnet as an option. Instances that picked the wrong subnet caused one of my CoreDNS pods to get this error.
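In case it helps, this is a rough way to spot the problem: compare where each node actually landed with where the CoreDNS pods were scheduled (the instance ID below is a placeholder, and this assumes the AWS CLI is configured):

# List nodes with their internal IPs to spot one on an unexpected subnet
kubectl get nodes -o wide

# Confirm which subnet a suspect instance actually picked (instance ID is hypothetical)
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].[PrivateIpAddress,SubnetId]' --output text

# See which nodes the CoreDNS pods ended up on
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide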

Upvotes: 0

John Datserakis

Reputation: 970

I run k8s on a few DigitalOcean droplets and was stuck on this very issue. No other info was given - just FailedCreatePodSandBox complaining about a file I had never seen before.

I spent a lot of time trying to figure it out - the only thing that fixed the issue for me was restarting the master and each node in their entirety. That got things going instantly.

sudo shutdown -r now

Upvotes: 0

Jonathan

Reputation: 830

This can occur if you specify a limit or request on memory and use the wrong unit.

The following triggered the message:

resources:
  limits:
    cpu: "300m"
    memory: "256m"
  requests:
    cpu: "50m"
    memory: "64m"

The correct version would be:

resources:
  limits:
    cpu: "300m"
    memory: "256Mi"
  requests:
    cpu: "50m"
    memory: "64Mi"

Upvotes: 19

frbl

Reputation: 1292

This might help someone else, but I spent a weekend on this until I noticed I had requested 1000 of memory instead of 1000Mi...

Upvotes: 1
