Reputation: 63
We are trying to use a Kubernetes persistent volume mapped to a pod/container directory as a backup. The container directory (/home) already contains data from the Docker image, but when we mount the Kubernetes persistent volume over the container directory (/home), the container's data is overridden and vanishes.
How can we make the Kubernetes persistent volume not override the container's data, and instead only add to the pre-existing data?
cat pv.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/home/xyz/dock/main/kube/storage"
cat pvclaim.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
Below is the main manifest file, which deploys the pod with the persistent volume:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
    - name: cgroup
      hostPath:
        path: /sys/fs/cgroup
        type: Directory
  containers:
    - name: rbf-container
      image: 10.190.205.11:5000/myimage/ubuntu:1.0
      securityContext:
        privileged: true
      volumeMounts:
        - mountPath: "/home/xyz"   ##-> mounting the persistent volume over the container directory /home/xyz
          name: task-pv-storage
        - mountPath: /sys/fs/cgroup
          name: cgroup
Output with the Kubernetes persistent volume:
$ ssh 10.244.4.29
Failed to add the host to the list of known hosts (/home/xyz/.ssh/known_hosts).
[email protected]'s password:
Last login: Tue Aug 25 11:16:48 2020 from 10.252.85.167
$ bash
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
xyz@mypod:~$ ls
xyz@mypod:~$ ls -l
total 0 ##--> no data present; it has all vanished
xyz@mypod:~$ pwd
/home/xyz
Output from the pod without the persistent volume:
$ ssh 10.244.4.29
Failed to add the host to the list of known hosts (/home/xyz/.ssh/known_hosts).
[email protected]'s password:
Last login: Tue Aug 25 11:16:48 2020 from 10.252.85.167
$ bash
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
xyz@mypod:~$ ls
xyz@mypod:~$ ls -l
total 465780
drwxrwxrwx 1 xyz xyz 4096 Aug 13 12:44 Desktop
drwxr-xr-x 2 xyz xyz 4096 Aug 25 11:12 Documents
drwxr-xr-x 2 xyz xyz 4096 Aug 25 11:12 Downloads
drwxr-xr-x 2 xyz xyz 4096 Aug 25 11:12 Music
drwxr-xr-x 2 xyz xyz 4096 Aug 25 11:12 Pictures
drwxr-xr-x 2 xyz xyz 4096 Aug 25 11:12 Public
drwxr-xr-x 2 xyz xyz 4096 Aug 25 11:12 Templates
drwxr-xr-x 2 xyz xyz 4096 Aug 25 11:12 Videos
-rw------- 1 xyz xyz 2404352 Aug 25 11:12 core
drwx------ 4 root root 4096 Aug 10 08:39 local.bak
-rw-r--r-- 1 root root 474439680 Aug 10 08:35 local.tar
As you can see, the data from the Docker image is available when the persistent volume is not used.
Upvotes: 6
Views: 2986
Reputation: 170
I'm only at the beginning of my Kubernetes journey and still trying to grasp the best practices, but I think what you're trying to do is not possible (in the case of mounting over /home). AFAIK the data is not overwritten, but still exists "under" the mount. You can try this yourself by bind-mounting a folder with mount --bind folder_a folder_b and then unmounting it with umount folder_b. Creating the mount hides the files in folder_b, but unmounting makes them appear again.
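This shadowing behaviour can be reproduced locally with a quick experiment (requires root; the /tmp paths and file name are made up for the demo):

```shell
# Demonstrates that a bind mount shadows files rather than deleting them.
mkdir -p /tmp/folder_a /tmp/folder_b
touch /tmp/folder_b/original.txt

mount --bind /tmp/folder_a /tmp/folder_b
ls /tmp/folder_b    # original.txt is hidden while the mount is active

umount /tmp/folder_b
ls /tmp/folder_b    # original.txt reappears
```

The same thing happens when kubelet mounts the PVC over /home/xyz: the image's files are still in the container's filesystem layer, just hidden beneath the empty mount point.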
So that would mean you just need to initialize the volume by copying the files onto it. You can achieve this with kubectl exec <pod> -- <command>, by running a Job, with initContainers, or with an entrypoint shell script. Since copying the files to the mounted volume over and over again is wasteful, I'd say the first two options are preferable.
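For the initContainers option, a minimal sketch of the questioner's pod manifest (the image, claim name, and paths are taken from the question; the init container name, the /mnt/seed scratch path, and the seeding command are my own assumptions): the init container mounts the PVC at a scratch path and copies the image's /home/xyz content into it before the main container starts, so the main container's mount at /home/xyz is no longer empty.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  initContainers:
    # Runs the same image with the PVC mounted at a scratch path and seeds
    # it with the image's original /home/xyz content. The guard skips the
    # copy when the volume already has data, so later restarts don't
    # clobber files the user changed on the volume.
    - name: seed-home
      image: 10.190.205.11:5000/myimage/ubuntu:1.0
      command:
        - sh
        - -c
        - '[ -n "$(ls -A /mnt/seed)" ] || cp -a /home/xyz/. /mnt/seed/'
      volumeMounts:
        - mountPath: /mnt/seed
          name: task-pv-storage
  containers:
    - name: rbf-container
      image: 10.190.205.11:5000/myimage/ubuntu:1.0
      volumeMounts:
        - mountPath: /home/xyz
          name: task-pv-storage
```

Note this runs on every pod start (the guard just makes repeat runs cheap), which is why a one-off kubectl exec or a Job may be the tidier choice if the data only ever needs to be seeded once.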
Upvotes: 3