Héctor

Reputation: 26084

"Waiting for tearing down pods" when Kubernetes turns down

I have a Kubernetes cluster installed on my Ubuntu machines. It consists of three machines: one combined master/node and two nodes.

When I bring the cluster down, it never stops printing "waiting for tearing down pods":

root@kubernetes01:~/kubernetes/cluster# KUBERNETES_PROVIDER=ubuntu ./kube-down.sh
Bringing down cluster using provider: ubuntu
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)
No resources found
No resources found
service "kubernetes" deleted
No resources found
waiting for tearing down pods
waiting for tearing down pods
waiting for tearing down pods
... (the line repeats indefinitely)

There are no pods or services running when I bring the cluster down. In the end, I have to force the shutdown by killing the processes and stopping the services.

Upvotes: 1

Views: 757

Answers (2)

Nilesh Suryavanshi

Reputation: 183

First, find out which replication controllers (rc) are running:

kubectl get rc --namespace=kube-system

Delete each running rc:

kubectl delete rc above_running_rc_name --namespace=kube-system

Then the cluster-down script, "KUBERNETES_PROVIDER=ubuntu ./kube-down.sh", will finish without hanging on "waiting for tearing down pods".

Example:

root@ubuntu:~/kubernetes/cluster# KUBERNETES_PROVIDER=ubuntu ./kube-down.sh
Bringing down cluster using provider: ubuntu
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)
No resources found
No resources found
service "kubernetes" deleted
No resources found
waiting for tearing down pods
waiting for tearing down pods
^C

root@ubuntu:~/kubernetes/cluster# kubectl get rc --namespace=kube-system
CONTROLLER                    CONTAINER(S)           IMAGE(S)                                                      SELECTOR                       REPLICAS   AGE
kubernetes-dashboard-v1.0.1   kubernetes-dashboard   gcr.io/google_containers/kubernetes-dashboard-amd64:v1.0.1   k8s-app=kubernetes-dashboard   1          44m

root@ubuntu:~/kubernetes/cluster# kubectl delete rc kubernetes-dashboard-v1.0.1 --namespace=kube-system
replicationcontroller "kubernetes-dashboard-v1.0.1" deleted

root@ubuntu:~/kubernetes/cluster# KUBERNETES_PROVIDER=ubuntu ./kube-down.sh
Bringing down cluster using provider: ubuntu
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)
No resources found
No resources found
service "kubernetes" deleted
No resources found
Cleaning on master 172.27.59.208
26979
etcd stop/waiting
Connection to 172.27.59.208 closed.
Connection to 172.27.59.208 closed.
Connection to 172.27.59.208 closed.
Cleaning on node 172.27.59.233
2165
flanneld stop/waiting
Connection to 172.27.59.233 closed.
Connection to 172.27.59.233 closed.
Done
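If more than one controller is left over, the two kubectl steps above can be combined into a single sweep. A minimal sketch, assuming the controllers live in kube-system; the `sweep_rcs` function name and the `KUBECTL` variable are illustrative, not part of kube-down.sh:

```shell
#!/bin/sh
# Sketch: delete every replication controller still present in a namespace
# before running the cluster-down script. KUBECTL is parameterized only so
# the logic can be exercised without a live cluster.
KUBECTL="${KUBECTL:-kubectl}"
NAMESPACE="kube-system"

sweep_rcs() {
  # List rc names (no header row), then delete each one by name.
  "$KUBECTL" get rc --namespace="$NAMESPACE" --no-headers \
    | awk '{print $1}' \
    | while read -r rc; do
        "$KUBECTL" delete rc "$rc" --namespace="$NAMESPACE"
      done
}
```

With the controllers gone, `KUBERNETES_PROVIDER=ubuntu ./kube-down.sh` should run to completion as in the example above.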

Upvotes: 6

Nikhil Jindal

Reputation: 1123

You can find out which pods it is waiting for by running:

kubectl get pods --show-all --all-namespaces

That is the check the teardown script runs: https://github.com/kubernetes/kubernetes/blob/1c80864913e4b9da957c45eef005b06dba68cec3/cluster/ubuntu/util.sh#L689
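The wait in kube-down.sh has roughly this shape (a paraphrase of the linked util.sh, not the exact code): it keeps polling until that pod listing comes back empty.

```shell
# Approximate shape of the tear-down wait loop in cluster/ubuntu/util.sh:
# keep polling until no pods remain in any namespace. Any pod that is never
# deleted (e.g. one managed by a leftover replication controller) makes
# this loop print "waiting for tearing down pods" forever.
while [ -n "$(kubectl get pods --show-all --all-namespaces --no-headers 2>/dev/null)" ]; do
  echo "waiting for tearing down pods"
  sleep 5
done
```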

Upvotes: 2
