David Prifti

Reputation: 711

Multiple Persistent Volumes with the same mount path Kubernetes

I have created 3 CronJobs in Kubernetes. They are identical except for their names. These are the specs:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: test-job-1 # for others it's test-job-2 and test-job-3
  namespace: cron-test
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: test-job-1 # for others it's test-job-2 and test-job-3
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - "/bin/sh"
            - "-c"
            args:
            - cd database-backup && touch $(date +%Y-%m-%d:%H:%M).test-job-1 && ls -la # for others the filename includes test-job-2 and test-job-3 respectively
            volumeMounts:
            - mountPath: "/database-backup"
              name: test-job-1-pv # for others it's test-job-2-pv and test-job-3-pv
          volumes:
          - name: test-job-1-pv # for others it's test-job-2-pv and test-job-3-pv
            persistentVolumeClaim:
              claimName: test-job-1-pvc # for others it's test-job-2-pvc and test-job-3-pvc

I also created the following PersistentVolumeClaims and PersistentVolumes:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-job-1-pvc # for others it's test-job-2-pvc or test-job-3-pvc
  namespace: cron-test
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: test-job-1-pv # depending on the name it's test-job-2-pv or test-job-3-pv
  storageClassName: manual
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-job-1-pv # for others it's test-job-2-pv and test-job-3-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/database-backup"

So all in all there are 3 CronJobs, 3 PersistentVolumes and 3 PersistentVolumeClaims. I can see that the PersistentVolumeClaims and PersistentVolumes are bound correctly to each other: test-job-1-pvc <--> test-job-1-pv, test-job-2-pvc <--> test-job-2-pv, and so on. The pods associated with each PVC are also the corresponding pods created by each CronJob, for example test-job-1-1609066800-95d4m <--> test-job-1-pvc. After letting the CronJobs run for a bit, I create another pod with the following specs to inspect test-job-1-pvc:

apiVersion: v1
kind: Pod
metadata:
  name: data-access
  namespace: cron-test
spec:
  containers:
    - name: data-access
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data-access-volume
          mountPath: /database-backup
  volumes:
    - name: data-access-volume
      persistentVolumeClaim:
        claimName: test-job-1-pvc

Just a simple pod that keeps running all the time. When I exec into that pod and look inside the /database-backup directory, I see the files created by the pods of all 3 CronJobs.
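For completeness, the bindings and the directory contents can be checked with something like the following (assuming kubectl is pointed at the cluster; the pod and namespace names are the ones from the specs above):

kubectl get pvc -n cron-test  # each PVC should show Bound with its matching PV
kubectl exec -n cron-test data-access -- ls -la /database-backup  # lists the files written by the CronJob pods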

What did I expect to see?

I expected to see only the files created by test-job-1.

Is this something expected to happen? And if so how can you separate the PersistentVolumes to avoid something like this?

Upvotes: 0

Views: 2221

Answers (1)

timsmelik

Reputation: 742

I suspect this is caused by the PersistentVolume definition: if you really only changed the name, all volumes are mapped to the same folder on the host.

  hostPath:
    path: "/database-backup"

Try giving each volume a unique folder on the host, e.g.:

  hostPath:
    path: "/database-backup/volume1"
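A minimal sketch of what the three PersistentVolumes could look like with unique host folders (the names match the question; the sub-folder names are only an illustration, any distinct paths work):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-job-1-pv # for the others: test-job-2-pv and test-job-3-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/database-backup/test-job-1" # unique per PV, e.g. .../test-job-2 and .../test-job-3

Each PVC still binds to its own PV as before; the only difference is that the files now land in separate directories on the node. If the folders do not already exist on the node, hostPath's type: DirectoryOrCreate can create them automatically.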

Upvotes: 2
