Svetoslav Petrov

Reputation: 1198

Kubernetes deployment wipes out persistent volume

I am trying to set up a containerised application with Kubernetes and I am facing an issue. When I change the image of the app and redeploy, the persistent volume seems to be wiped out - for example, if I do a deployment with v1.0.0 and then a new deployment with v1.0.1. I have tried setting up the app with both a Deployment and a StatefulSet, but the result is the same; see the code below.

I am fairly new to Kubernetes and any help will be appreciated.
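(To be clear, by "redeploy" I just mean changing the image tag in the manifest and re-applying it - for example, something along the lines of:

kubectl set image statefulset/admin-node admin-node=eu.gcr.io/my-project/my-image:v1.0.1

or editing the tag in the YAML and running kubectl apply -f on it again.)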

Option 1 (StatefulSet):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: test-admin-db-pv
spec:
  storageClassName: ''
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/lib/postgresql-test-admin-db/
---
apiVersion: v1
kind: Service
metadata:
  name: admin-node
  labels:
    app: admin-node
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"backendConfig-test"}}'
spec:
  type: NodePort
  selector:
    app: admin-node
  ports:
    - name: test-web-port
      port: 80
      targetPort: 4000
      protocol: TCP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: admin-node
spec:
  replicas: 1
  serviceName: admin-node
  selector:
    matchLabels:
      app: admin-node
  volumeClaimTemplates:
    - metadata:
        name: test-admin-db-claim
      spec:
        storageClassName: ''
        accessModes:
          - ReadWriteOnce
        volumeName: test-admin-db-pv
        resources:
          requests:
            storage: 1Gi
  template:
    metadata:
      labels:
        app: admin-node
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - image: 'postgres:10.4'
          name: admin-db
          env:
            - name: POSTGRES_DB
              value: db
            - name: POSTGRES_USER
              value: dbuser
            - name: POSTGRES_PASSWORD
              value: dbpassword
          ports:
            - containerPort: 5432
              name: admin-db
          volumeMounts:
            - name: test-admin-db-claim
              mountPath: /var/lib/postgresql
        - image: 'eu.gcr.io/my-project/my-image:v1.0.0'
          name: admin-node
          ports:
            - containerPort: 4000
              name: admin-node
          livenessProbe:
            httpGet:
              path: /api/health
              port: 4000
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            httpGet:
              path: /api/health
              port: 4000
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 10
          env:
            - name: NODE_ENV
              value: development
            - name: LOG_LEVEL
              value: info
            - name: NAMESPACE
              value: test
            - name: DB_DB
              value: db
            - name: DB_HOST
              value: 0.0.0.0
            - name: DB_PASS
              value: dbpassword
            - name: DB_USER
              value: dbuser
            - name: DB_PORT
              value: '5432'
      restartPolicy: Always

Option 2 (Deployment):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: test-admin-db-pv
spec:
  storageClassName: ''
  capacity:
    storage: 1Gi
  claimRef:
    name: test-admin-db-claim
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/lib/postgresql-test-admin-db/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-admin-db-claim
spec:
  storageClassName: ''
  accessModes:
    - ReadWriteOnce
  volumeName: test-admin-db-pv
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: admin-node
  labels:
    app: admin-node
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"backendConfig-test"}}'
spec:
  type: NodePort
  selector:
    app: admin-node
  ports:
    - name: test-web-port
      port: 80
      targetPort: 4000
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admin-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: admin-node
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: admin-node
    spec:
      containers:
        - image: 'postgres:10.4'
          name: admin-db
          env:
            - name: POSTGRES_DB
              value: db
            - name: POSTGRES_USER
              value: dbuser
            - name: POSTGRES_PASSWORD
              value: dbpassword
          ports:
            - containerPort: 5432
              name: admin-db
          volumeMounts:
            - name: test-admin-db-persistent-storage
              mountPath: /var/lib/postgresql
        - image: 'eu.gcr.io/my-project/my-image:v1.0.0'
          name: admin-node
          ports:
            - containerPort: 4000
              name: admin-node
          livenessProbe:
            httpGet:
              path: /api/v1/health
              port: 4000
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 10
          readinessProbe:
            httpGet:
              path: /api/v1/health
              port: 4000
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 10
          env:
            - name: NODE_ENV
              value: development
            - name: LOG_LEVEL
              value: info
            - name: NAMESPACE
              value: test
            - name: DB_DB
              value: db
            - name: DB_HOST
              value: 0.0.0.0
            - name: DB_PASS
              value: dbpassword
            - name: DB_USER
              value: dbuser
            - name: DB_PORT
              value: '5432'
      volumes:
        - name: test-admin-db-persistent-storage
          persistentVolumeClaim:
            claimName: test-admin-db-claim
      restartPolicy: Always

Edit: Since it was mentioned, I am sharing the persistent volume info below (the reclaim policy is already set to Retain):

Name:            test-admin-db-pv
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    
Status:          Bound
Claim:           wipetestdep/test-admin-db-claim-admin-node-0
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        10Gi
Node Affinity:   <none>
Message:         
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/postgresql-wipetestdep-admin-db/
    HostPathType:  
Events:            <none>

Upvotes: 0

Views: 183

Answers (1)

Arghya Sadhu

Reputation: 44569

From the docs:

PersistentVolumes can have various reclaim policies, including "Retain", "Recycle", and "Delete". For dynamically provisioned PersistentVolumes, the default reclaim policy is "Delete". This means that a dynamically provisioned volume is automatically deleted when a user deletes the corresponding PersistentVolumeClaim. This automatic behavior might be inappropriate if the volume contains precious data. In that case, it is more appropriate to use the "Retain" policy. With the "Retain" policy, if a user deletes a PersistentVolumeClaim, the corresponding PersistentVolume is not deleted. Instead, it is moved to the Released phase, where all of its data can be manually recovered.
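(As a side note, once a PersistentVolume is in the Released phase you can usually make it bindable again by removing its claimRef; the generic patch below is one way to do that, not something specific to your manifests:

kubectl patch pv <your-pv-name> --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'

)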

Change the reclaim policy to Retain using:

kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

where <your-pv-name> is the name of your chosen PersistentVolume.
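To verify the change, something like the following should print Retain (the jsonpath query is plain kubectl, nothing specific to your setup):

kubectl get pv <your-pv-name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'

If you manage the PV through a manifest anyway, you can also set persistentVolumeReclaimPolicy: Retain directly under spec so the policy survives re-creating the PV.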

Upvotes: 1
