Reputation: 1681
The StatefulSet es-data was failing in our test environment and I was asked to delete the corresponding PV.
So I deleted the following for es-data: 1) the PVC, 2) the PV. Both showed as Terminating and were left over the weekend. This morning they still showed as Terminating, so I deleted both the PVC and the PV forcefully. No joy. To fix the whole thing I had to delete the StatefulSet.
Is this the correct way to delete a PV?
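For reference, the sequence described above roughly corresponds to commands like these (the resource names are illustrative placeholders, not taken from the actual cluster):
kubectl delete pvc <es-data-pvc-name>
kubectl delete pv <es-data-pv-name>
# both hung in Terminating; forced deletion also had no effect
kubectl delete pvc <es-data-pvc-name> --grace-period=0 --force
kubectl delete pv <es-data-pv-name> --grace-period=0 --force
# only this finally let them go
kubectl delete statefulset es-data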
Upvotes: 50
Views: 123864
Reputation: 1276
I followed this method and it worked fine for me.
kubectl delete pv {your-pv-name} --grace-period=0 --force
After that, edit the PVC configuration:
kubectl edit pvc {your-pvc-name}
and remove the finalizers entry from the PVC configuration:
finalizers:
- kubernetes.io/pvc-protection
You can read more about finalizers in the official Kubernetes documentation.
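If you prefer not to open an editor, the same finalizer removal can be done non-interactively with a patch (a sketch; substitute your PVC name):
kubectl patch pvc {your-pvc-name} -p '{"metadata":{"finalizers":null}}'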
Upvotes: 3
Reputation: 893
HINT: PVs may be named like pvc-name-of-volume, which can be confusing!
(PV = Persistent Volume, PVC = Persistent Volume Claim)
First find the PVs: kubectl get pv (PVs are cluster-scoped, so the -n {namespace} flag is not needed)
Then delete the PV in order to set its status to Terminating:
kubectl delete pv {PV_NAME}
Then patch the PV to remove its finalizers; the deletion will then complete and the bound PVC's status will change to Lost:
kubectl patch pv {PV_NAME} -p '{"metadata":{"finalizers":null}}'
Then list the PVCs: kubectl get pvc -n {namespace}
Then you can delete the PVC:
kubectl delete pvc {PVC_NAME} -n {namespace}
Example: let's say we have Kafka installed in the storage namespace.
$ kubectl get pv -n storage
$ kubectl delete pv pvc-ccdfe297-44c9-4ca7-b44c-415720f428d1
$ kubectl get pv -n storage
(the delete command hangs, but the PV status turns to Terminating)
$ kubectl patch pv pvc-ccdfe297-44c9-4ca7-b44c-415720f428d1 -p '{"metadata":{"finalizers":null}}'
$ kubectl get pvc -n storage
$ kubectl delete pvc data-kafka-0 -n storage
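If the PVC itself also hangs in Terminating, its finalizers can be cleared the same way as the PV's (a sketch reusing the names from the example above):
$ kubectl patch pvc data-kafka-0 -n storage -p '{"metadata":{"finalizers":null}}'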
Upvotes: 5
Reputation: 8684
Most answers on this thread simply mention the commands without explaining the root cause.
Here is a diagram to help you understand it better. Refer to my other answer for commands and additional info: https://stackoverflow.com/a/73534207/6563567
[diagram: how to cleanly delete a volume]
In your case, the PVC and PV are stuck in the Terminating state because of finalizers. Finalizers are guard rails in Kubernetes to avoid accidental deletion of resources.
Your observations are correct and this is how Kubernetes works, but the order in which you deleted the resources was incorrect.
This is what happened:
The PV was stuck terminating because the PVC still existed. The PVC was stuck terminating because the StatefulSet's pods were still using the volume (volumes are attached to the nodes and mounted into the pods). As soon as you deleted the pods/StatefulSet, the volumes were no longer in use, so the PVC and PV were successfully removed.
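In other words, deleting in the reverse order of usage should not hang at all. A minimal sketch (the PVC/PV names are placeholders):
kubectl delete statefulset es-data   # or scale it to 0, so no pod mounts the volume anymore
kubectl delete pvc <pvc-name>        # completes once no pod uses the claim
kubectl delete pv <pv-name>          # only needed if the reclaim policy is Retain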
Upvotes: 13
Reputation:
At the beginning, be sure that your Reclaim Policy is set to Delete. After the PVC is deleted, the PV should then be deleted automatically.
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming
If it doesn't help, please check this [closed] Kubernetes PV issue: https://github.com/kubernetes/kubernetes/issues/69697
and try to delete the PV finalizers.
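For reference, the reclaim policy of an existing PV can be changed with a patch, and the finalizers can be cleared the same way as in the other answers (a sketch; substitute your PV name):
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
kubectl patch pv <pv-name> -p '{"metadata":{"finalizers":null}}'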
Upvotes: 7
Reputation: 1079
It worked for me when I first deleted the PVC, then the PV:
kubectl delete pvc data-p-0
kubectl delete pv <pv-name> --grace-period=0 --force
(This assumes you want to delete the PVC as well; deleting the PV alone seems to hang otherwise.)
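If the PVC deletion itself hangs, it is usually because a pod still mounts the volume. A quick way to check (using the PVC name from above):
kubectl describe pvc data-p-0
In recent kubectl versions the describe output lists the pods that still reference the claim, which tells you what to delete or scale down first.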
Upvotes: 20
Reputation: 121
First run kubectl patch pv {PV_NAME} -p '{"metadata":{"finalizers":null}}'
then run kubectl delete pv {PV_NAME}
Upvotes: 12
Reputation: 30083
kubectl delete pv [pv-name]
Also, you have to check the Reclaim Policy of the PV; it should be Delete rather than Retain if you expect the volume to be removed.
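One way to check the reclaim policy (a sketch; the PV name is a placeholder):
kubectl get pv [pv-name] -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'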
Upvotes: 0
Reputation: 13443
You can delete the PV using the following two commands:
kubectl delete pv <pv_name> --grace-period=0 --force
And then delete the finalizers using:
kubectl patch pv <pv_name> -p '{"metadata": {"finalizers": null}}'
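Note that with the finalizers still present, the PV will stay in Terminating until they are removed, so the patch can also be run first; a sketch of the patch-first order (same placeholder as above):
kubectl patch pv <pv_name> -p '{"metadata": {"finalizers": null}}'
kubectl delete pv <pv_name> --grace-period=0 --force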
Upvotes: 81