Reputation: 3551
How do I force delete Namespaces stuck in Terminating?
apiVersion: v1
kind: Namespace
metadata:
  name: delete-me
spec:
  finalizers:
  - foregroundDeletion
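Assuming the manifest above is saved as tmp.yaml (the file name used further down in this question), the Namespace is created with a plain apply:
kubectl apply -f tmp.yaml
Deleting it is then attempted with: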
kubectl delete ns delete-me
It is not possible to delete delete-me.
The only workaround I've found is to destroy and recreate the entire cluster.
None of the attempts below works or modifies the Namespace. After each of them the problematic finalizer still exists.
kubectl apply
Apply the following YAML, with the finalizer entry removed:
apiVersion: v1
kind: Namespace
metadata:
  name: delete-me
spec:
  finalizers:
$ kubectl apply -f tmp.yaml
namespace/delete-me configured
The command finishes with no error, but the Namespace is not updated.
The below YAML has the same result:
apiVersion: v1
kind: Namespace
metadata:
  name: delete-me
spec:
kubectl edit
kubectl edit ns delete-me and remove the finalizer. Ditto removing the list entirely. Ditto removing spec. Ditto replacing finalizers with an empty list.
$ kubectl edit ns delete-me
namespace/delete-me edited
This shows no error message but does not update the Namespace. kubectl editing the object again shows the finalizer still there.
kubectl proxy &
kubectl proxy &
curl -k -H "Content-Type: application/yaml" -X PUT --data-binary @tmp.yaml http://127.0.0.1:8001/api/v1/namespaces/delete-me/finalize
As above, this exits successfully but does nothing.
kubectl delete ns delete-me --force --grace-period=0
This actually results in an error:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (Conflict): Operation cannot be fulfilled on namespaces "delete-me": The system is ensuring all content is removed from this namespace. Upon completion, this namespace will automatically be purged by the system.
However, it doesn't actually do anything.
In the test cluster I set up to debug this issue, I've been waiting over a week. Even if the Namespace would eventually get deleted on its own, I need it deleted faster than a week.
The Namespace is empty.
$ kubectl get -n delete-me all
No resources found.
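Note that kubectl get all only covers a subset of resource types. A more exhaustive emptiness check (a sketch, not part of the original question) enumerates every namespaced resource type that supports list:
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n delete-me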
etcdctl
$ etcdctl --endpoint=http://127.0.0.1:8001 rm /namespaces/delete-me
Error: 0: () [0]
I'm pretty sure that's an error, but I have no idea how to interpret that. It also doesn't work. Also tried with --dir and -r.
ctron/kill-kube-ns
There is a script for force deleting Namespaces. This also does not work.
$ ./kill-kube-ns delete-me
Killed namespace: delete-me
$ kubectl get ns delete-me
NAME STATUS AGE
delete-me Terminating 1h
POSTing the edited resource to /finalize returns a 405. I'm not sure if this is the canonical way to POST to /finalize, though.
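A 405 here is consistent with the finalize subresource accepting only PUT (replace), not POST. The exact command is not shown above, but the attempt was presumably along these lines (hypothetical reconstruction):
# Hypothetical; mirrors the earlier PUT but with -X POST, which the endpoint rejects.
curl -k -H "Content-Type: application/yaml" -X POST --data-binary @tmp.yaml \
  http://127.0.0.1:8001/api/v1/namespaces/delete-me/finalize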
This appears to be a recurring problem and none of these resources helped.
Upvotes: 47
Views: 70040
Reputation: 41
I just did the below 2 steps and it worked like a charm. Before this, I had tried many ways, which all went in vain.
Step-1
In the 1st terminal, run kubectl proxy
Step-2
In the 2nd terminal, run the below command; make sure you have jq installed on the system. Just replace istio-system with the namespace you wish to delete and which is stuck in the Terminating state.
kubectl get ns istio-system -o json | \
jq '.spec.finalizers=[]' | \
curl -X PUT http://localhost:8001/api/v1/namespaces/istio-system/finalize -H "Content-Type: application/json" --data @-
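A quick check afterwards (not part of the original two steps) confirms the namespace is gone:
kubectl get ns istio-system
# Expected once the finalizers are cleared and the purge completes:
# Error from server (NotFound): namespaces "istio-system" not found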
Upvotes: 0
Reputation: 199
In case you are not able to run the command and are getting a parsing error:
curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/[your-namespace]/finalize
then make sure that the JSON you have contains the finalizers section in the correct form, like below; note that finalizers is an empty list []:
"spec": {
"finalizers":[
]
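For reference, a complete tmp.json of that shape might look like the following; the namespace name is a placeholder:
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "[your-namespace]"
  },
  "spec": {
    "finalizers": []
  }
}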
Upvotes: 0
Reputation: 11
I had the same issue in an RKE2/Harvester cluster. I tried everything (curling the k8s API, forced delete, etc.). There were no resources left in the namespace. Finally I removed the key from etcd:
ETCDCTL_API=3 etcdctl \
--cert=/var/lib/rancher/rke2/server/tls/etcd/client.crt \
--key=/var/lib/rancher/rke2/server/tls/etcd/client.key \
--cacert=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt \
del /registry/namespaces/<NAMESPACE_TO_DELETE>
Certainly not the best method to solve the issue, but it may be the last resort.
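If you go this route, the key can first be confirmed with a read-only query using the same client flags (a sketch, not from the original answer):
ETCDCTL_API=3 etcdctl \
  --cert=/var/lib/rancher/rke2/server/tls/etcd/client.crt \
  --key=/var/lib/rancher/rke2/server/tls/etcd/client.key \
  --cacert=/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt \
  get /registry/namespaces/<NAMESPACE_TO_DELETE> --keys-only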
Upvotes: 1
Reputation: 171
Here is a modification to the command provided by the user Ushakov Roman (see there for a detailed explanation). Compared to Ushakov Roman's solution, defining the namespace variable at the beginning reduces the number of places where the namespace name actually has to be typed:
namespace=<NAME_OF_NAMESPACE> && kubectl get ns $namespace -o json | jq '.spec.finalizers = []' | kubectl replace --raw "/api/v1/namespaces/$namespace/finalize" -f -
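Taking that a step further, the one-liner can be wrapped in a small shell function so the name is typed exactly once per call (a sketch; the function name is made up and not part of the original answer):
# Hypothetical helper around the one-liner above.
kill_ns() {
  kubectl get ns "$1" -o json \
    | jq '.spec.finalizers = []' \
    | kubectl replace --raw "/api/v1/namespaces/$1/finalize" -f -
}
# Usage: kill_ns delete-me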
Upvotes: 17
Reputation: 401
Applying this command after replacing the two occurrences of <NAME_OF_NAMESPACE> with the actual name of the namespace stuck in Terminating can solve the issue:
kubectl get ns <NAME_OF_NAMESPACE> -o json | jq '.spec.finalizers = []' | kubectl replace --raw "/api/v1/namespaces/<NAME_OF_NAMESPACE>/finalize" -f -
Most replies seem to do the same thing here: remove the finalizers from the namespace. In this case, this is done in three steps:
1. kubectl get ns <NAME_OF_NAMESPACE> -o json returns the namespace configuration in JSON format. This is piped into the next command.
2. jq '.spec.finalizers = []' removes all finalizers from the JSON configuration. The resulting JSON (without finalizers) is then piped into the next command.
3. kubectl replace --raw "/api/v1/namespaces/<NAME_OF_NAMESPACE>/finalize" -f - injects the updated JSON namespace configuration (without finalizers) into k8s.
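To see what the middle step removes, the current finalizers can be inspected on their own first (an optional check, not part of the original answer):
kubectl get ns <NAME_OF_NAMESPACE> -o json | jq '.spec.finalizers'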
Upvotes: 30
Reputation: 3201
This article brought me here for a second time, but this time we are using the Rancher (RKE2) Kubernetes edition. The trick here is to call the Rancher API directly in order to pass the deletion request (the trick with the proxy does not work). Hope this helps someone.
Don't forget to change the Bearer token.
export NS='delete-me';
export YOURFQDN='rancher2.dev.prod.local';
export YOURCLUSTER='c-xxxx1';
kubectl get ns ${NS} -o json | jq '.spec.finalizers=[]' | \
curl -X PUT https://${YOURFQDN}/k8s/clusters/${YOURCLUSTER}/api/v1/namespaces/${NS}/finalize \
-H "Accept: application/json" \
-H "Authorization: Bearer token-xxxx:xxxxYOURxxxxTOKENxxxx" \
-H "Content-Type: application/json" --data @-
Upvotes: 2
Reputation: 4458
I loved this answer extracted from here
In one terminal:
kubectl proxy
In another terminal:
kubectl get ns delete-me -o json | \
jq '.spec.finalizers=[]' | \
curl -X PUT http://localhost:8001/api/v1/namespaces/delete-me/finalize -H "Content-Type: application/json" --data @-
Upvotes: 39
Reputation: 3551
The kubectl proxy attempt is almost correct, but not quite. It's possible that using JSON instead of YAML does the trick, but I'm not certain.
The JSON with an empty finalizers list:
~$ cat ns.json
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "delete-me"
  },
  "spec": {
    "finalizers": []
  }
}
Use curl to PUT the object without the problematic finalizer.
~$ curl -k -H "Content-Type: application/json" -X PUT --data-binary @ns.json http://127.0.0.1:8007/api/v1/namespaces/delete-me/finalize
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "delete-me",
    "selfLink": "/api/v1/namespaces/delete-me/finalize",
    "uid": "0df02f91-6782-11e9-8beb-42010a800137",
    "resourceVersion": "39047",
    "creationTimestamp": "2019-04-25T17:46:28Z",
    "deletionTimestamp": "2019-04-25T17:46:31Z",
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"delete-me\"},\"spec\":{\"finalizers\":[\"foregroundDeletion\"]}}\n"
    }
  },
  "spec": {
  },
  "status": {
    "phase": "Terminating"
  }
}
The Namespace is deleted!
~$ kubectl get ns delete-me
Error from server (NotFound): namespaces "delete-me" not found
Upvotes: 40