Ondra Žižka

Reputation: 46796

Kubernetes: how to do dynamic PersistentVolumeClaim with persistentVolumeReclaimPolicy: Retain

I have a PersistentVolume that is dynamically provisioned through a PersistentVolumeClaim.

I would like to keep the PV after the pod is done, i.e. the behavior of persistentVolumeReclaimPolicy: Retain.

However, that setting applies to a PersistentVolume, not to a PersistentVolumeClaim (AFAIK).

How can I change this behavior for dynamically provisioned PVs?

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
    name: {{ .Release.Name }}-pvc
spec:
    accessModes:
      - ReadWriteOnce
    storageClassName: gp2
    resources:
        requests:
            storage: 6Gi

---
kind: Pod
apiVersion: v1
metadata:
    name: "{{ .Release.Name }}-gatling-test"
spec:
    restartPolicy: Never
    containers:
      - name: {{ .Release.Name }}-gatling-test
        image: ".../services-api-mvn-builder:latest"
        command: ["sh", "-c", 'mvn -B gatling:test -pl csa-testing -DCSA_SERVER={{ template "project.fullname" . }} -DCSA_PORT={{ .Values.service.appPort }}']
        volumeMounts:
          - name: "{{ .Release.Name }}-test-res"
            mountPath: "/tmp/testResults"

    volumes:
      - name: "{{ .Release.Name }}-test-res"
        persistentVolumeClaim:
          claimName: "{{ .Release.Name }}-pvc"
          #persistentVolumeReclaimPolicy: Retain  ???

Upvotes: 23

Views: 21523

Answers (4)

Peter V. Mørch

Reputation: 15907

This is not the answer to the OP, but the answer to the personal itch that led me here is that I don't need reclaimPolicy: Retain at all. I need a StatefulSet instead. Read on if this is for you:

My itch was to have a PersistentVolume that gets re-used by the container over and over, the way volumes behave by default with docker and docker-compose. So a new PersistentVolume should only get created the very first time:

# Create a new PersistentVolume the very first time
kubectl apply  -f my.yaml 

# This leaves the "volume" - the PersistentVolume - alone
kubectl delete -f my.yaml

# Second and subsequent times re-use the same PersistentVolume
kubectl apply  -f my.yaml 

And I thought the way to do that was to declare a PersistentVolumeClaim with reclaimPolicy: Retain and then reference that in my deployment. But even when I got reclaimPolicy: Retain working, a brand new PersistentVolume still got created on every kubectl apply. reclaimPolicy: Retain just ensured that the old ones didn't get deleted.

But no. The way to achieve this use case is with a StatefulSet. It is way simpler, and it behaves the way I'm used to from docker and docker-compose.
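
For concreteness, here is a minimal sketch of that approach; all names and the image below are illustrative, not from my actual setup. The volumeClaimTemplates section creates a PVC that outlives a kubectl delete of the StatefulSet and gets re-bound on the next kubectl apply:

# Minimal sketch -- names and image are illustrative.
# volumeClaimTemplates creates one PVC per replica; deleting the
# StatefulSet leaves the PVC (and its PV) in place, and the next
# apply re-binds to the same volume.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          volumeMounts:
            - name: data
              mountPath: /var/lib/my-app
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp2
        resources:
          requests:
            storage: 6Gi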

Upvotes: 13

will

Reputation: 101

You can configure it in pv.yaml or storageclass.yaml, or patch an existing PV.

pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2

storageclass.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-retain
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Retain

Or patch an existing PV:

kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
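
To confirm the patch took effect, you can read the field back with something like this (the PV name is illustrative):

kubectl get pv <your-pv-name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'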

Upvotes: 7

Tummala Dhanvi

Reputation: 3380

A workaround is to create a new StorageClass with reclaimPolicy: Retain and use that StorageClass everywhere.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-retain
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Retain

PS: The reclaimPolicy of an existing StorageClass can't be edited, but you can delete the StorageClass and recreate it with reclaimPolicy: Retain.
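
For example, the OP's claim would then reference the new class instead of gp2 (a sketch reusing the PVC from the question):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
    name: {{ .Release.Name }}-pvc
spec:
    accessModes:
      - ReadWriteOnce
    storageClassName: gp2-retain   # the new StorageClass with reclaimPolicy: Retain
    resources:
        requests:
            storage: 6Gi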

Upvotes: 10

Anton Kostenko

Reputation: 8983

There is an issue on the Kubernetes GitHub about the reclaim policy of dynamically provisioned volumes.

The short answer is "no": you cannot set the policy on the claim.

Here is a short quote from the dialogue in the ticket on how to avoid the PV deletion:

speedplane: Stumbled upon this and I'm going through a similar issue. I want to create an Elasticsearch cluster but make sure that if the cluster goes down for whatever reason, the data stored on the persistent disks gets maintained across the restart. I currently have one PersistentVolumeClaim for each Elasticsearch deployment that carries data.

jsafrane: @speedplane: it is maintained as long as you don't delete the PVC. The reclaim policy is executed only if Kubernetes sees a PV that was bound to a PVC and the PVC does not exist.

speedplane: @jsafrane Okay, got it. So you just have to be careful with the PVCs; deleting one is like deleting all the data on the disk.
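
In other words, with Retain the PV merely moves to a Released state when its PVC is deleted, and the underlying disk survives (a hypothetical session; the claim name is illustrative):

kubectl delete pvc my-pvc   # the PV is NOT deleted...
kubectl get pv              # ...it remains, with STATUS "Released"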

Upvotes: 5
