Reputation: 4784
I am using Helm to deploy a StatefulSet; below is the YAML:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: {{ .Values.database.mongo.storageClassName }}
  labels:
    for: for-mongo-statefulset
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
reclaimPolicy: Retain
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: {{ .Values.database.mongo.serviceName }}
  replicas: {{ .Values.database.mongo.replicas }}
  template:
    metadata:
      labels:
        role: mongo
        environment: prod
    spec:
      serviceAccountName: {{ .Values.serviceAccount }}
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--bind_ip"
            - 0.0.0.0
            - "--replSet"
            - {{ .Values.database.mongo.replicaSet }}
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: {{ .Values.database.mongo.port }}
          volumeMounts:
            - name: {{ .Values.database.mongo.storageName }}
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment=prod"
            - name: KUBERNETES_MONGO_SERVICE_NAME
              value: {{ .Values.database.mongo.serviceName }}
  volumeClaimTemplates:
    - metadata:
        name: {{ .Values.database.mongo.storageName }}
      spec:
        storageClassName: {{ .Values.database.mongo.storageClassName }}
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 100Gi
On helm install . -n release-name it creates the StorageClass, PersistentVolume and PersistentVolumeClaim.
If I delete the release with helm delete release-name --purge, it keeps the PV and PVC, which is fine. But it deletes the StorageClass, even though I have specified reclaimPolicy: Retain on the StorageClass.
Is this expected behaviour?
Helm version
Client: v2.10.0+g9ad53aa
Server: v2.10.0+g9ad53aa
Kubernetes version
Client Version: v1.11.1
Server Version: v1.9.7-gke.5
Update
I assumed reclaimPolicy applied to both the StorageClass and the PV/PVC. Thanks to @Pablo for clearing up my understanding of reclaimPolicy:
Persistent Volumes that are dynamically created by a storage class will have the reclaim policy specified in the reclaimPolicy field of the class, which can be either Delete or Retain. If no reclaimPolicy is specified when a StorageClass object is created, it will default to Delete.
Is there anything similar to reclaimPolicy that will tell Helm/Kubernetes not to delete the StorageClass when performing helm delete release-name --purge?
Upvotes: 1
Views: 3790
Reputation: 8801
The annotation "helm.sh/resource-policy": keep instructs Tiller to skip this resource during a helm delete operation. However, the resource becomes orphaned: Helm will no longer manage it in any way. This can lead to problems when using helm install --replace on a release that has already been deleted but has kept resources.
To explicitly opt into resource deletion, for example when overriding a chart's default annotations, set the resource-policy annotation value to delete.
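For reference, a minimal sketch of the StorageClass template from the question with that annotation added (same values paths as in the chart above; the comment is only illustrative):
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: {{ .Values.database.mongo.storageClassName }}
  labels:
    for: for-mongo-statefulset
  annotations:
    # keep tells Tiller to leave this resource in place on helm delete
    "helm.sh/resource-policy": keep
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
reclaimPolicy: Retain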
Upvotes: 1
Reputation: 48
Try setting the StorageClass reclaimPolicy to Delete:
https://kubernetes.io/docs/concepts/storage/storage-classes/#reclaim-policy
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: {{ .Values.database.mongo.storageClassName }}
  labels:
    for: for-mongo-statefulset
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
reclaimPolicy: Delete
Upvotes: 1
Reputation: 1145
The reclaim policy listed in the StorageClass object applies to the persistent volumes, not to the storage class itself. That means the PVs dynamically provisioned through that storage class will inherit the reclaim policy set in the class.
You can find more info on that here: https://kubernetes.io/docs/concepts/storage/storage-classes/
Upvotes: 3
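As a quick check, a sketch of verifying the reclaim policy that the dynamically provisioned PVs inherited (the PV name below is a placeholder; the real name is generated by the provisioner):
kubectl get pv
# or inspect a single volume directly (pvc-0a1b2c3d is hypothetical)
kubectl get pv pvc-0a1b2c3d -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'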