Reputation: 2993
I'd like to set up a Persistent Volume (PV and PVC) shared by pods in a kind cluster. However, I need the data to be persisted on my laptop (the host machine) as well, so the volume's path should be a directory on my laptop that I can access directly.
If I delete the kind cluster, the volume should be persisted and not destroyed.
I also want to be able to easily add, update, or copy files from that volume on my host laptop.
How can I make the pods in the kind cluster aware of this setup?
Please find my kind.yaml file for your reference.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
Upvotes: 17
Views: 24386
Reputation: 1437
To set up a Persistent Volume (PV) and Persistent Volume Claim (PVC) shared by pods in a kind cluster while keeping the data persisted on your laptop, you can follow these steps:
1. Create a directory on your laptop that will serve as the backing store for the PV.
2. Create a YAML file for the PV and PVC, specifying the path to that directory as the source of the PV.
3. Apply the YAML file to your kind cluster to create the PV and PVC.
4. In your pod specification, refer to the PVC by its name to mount the volume.
Here's an example of a YAML file for the PV and PVC:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /path/to/your/laptop/directory
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
To apply the YAML file, run:
kubectl apply -f pv-pvc.yaml
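To confirm that the claim bound to the volume, you can check both objects (the names are those defined in the YAML above); each should eventually report a STATUS of Bound:
kubectl get pv mypv
kubectl get pvc mypvc
Note that with kind's default StorageClass the claim may stay Pending until a pod actually consumes it.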
In your pod specification, refer to the PVC by its name to mount the volume:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer
    image: myimage
    volumeMounts:
    - name: mypv
      mountPath: /path/in/container
  volumes:
  - name: mypv
    persistentVolumeClaim:
      claimName: mypvc
Note: adjust the names, paths, and sizes to your environment and test the setup before relying on it.
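One way to test the wiring, assuming the container image provides a shell (test.txt is a hypothetical file name):
kubectl exec mypod -- sh -c 'echo hello > /path/in/container/test.txt'
Keep in mind that in a kind cluster the hostPath refers to the node container's filesystem, so the file only reaches your laptop if the node directory is mapped to the host with extraMounts, as described in the other answers.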
Upvotes: 2
Reputation: 131
From the example above:
...
extraMounts:
- hostPath: /home/bill/work/www
  containerPath: /www
...
So the path on your host (laptop) is /home/bill/work/www and the path in the Kubernetes node is /www.
kind runs its nodes as Docker containers, so you can use Docker to inspect them. Run
docker ps -a
This lists the kind containers, each of which is a Kubernetes node. To open a shell on a node, take a CONTAINER_ID from that output and run
docker exec -it CONTAINER_ID /bin/bash
Now you have a shell running on that node. Check whether the node has mounted your host filesystem properly; just run
ls /www
on the node; you should see the contents of /home/bill/work/www.
What you have achieved is that this part of the node filesystem is persisted by the host (laptop). You can therefore destroy the cluster and recreate it with the same kind config file; the node will remount the directory and no information is lost.
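For example, assuming the cluster config is saved as kind-config.yaml:
kind delete cluster
kind create cluster --config kind-config.yaml
ls /home/bill/work/www
The delete leaves the laptop directory untouched, and the recreated cluster mounts it again.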
With this working setup you can create a persistent volume (PV) and claim it with a persistent volume claim (PVC) as described above.
Hope this helps.
Upvotes: 4
Reputation: 385
I would like to add that, to minimise kind-specific configuration, you should use a PV/PVC; this way the configuration on a real cluster will differ only in the definition of the PV.
So if you configure extraMounts on your kind cluster:
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /home/bill/work/www
    containerPath: /www
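Create the cluster with this config (assuming it is saved as kind-config.yaml):
kind create cluster --config kind-config.yaml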
Then on that cluster create PV and PVC:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-www
spec:
  storageClassName: standard
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  hostPath:
    path: /www/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-www
spec:
  volumeName: pv-www
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
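Apply both objects and check that the claim binds to the volume (pv-pvc.yaml is just an example file name):
kubectl apply -f pv-pvc.yaml
kubectl get pv pv-www
kubectl get pvc pvc-www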
After that you can use it in a deployment like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: pvc-www
      containers:
      - name: nginx
        image: nginx:1.14.2
        volumeMounts:
        - name: www
          mountPath: /var/www
As a result, your local /home/bill/work/www will be mounted at /var/www inside the containers.
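A quick end-to-end check, assuming the deployment is running (test.html is a hypothetical file name): create a file on the laptop and read it back from a pod.
echo hello > /home/bill/work/www/test.html
kubectl exec deploy/nginx-deployment -- cat /var/www/test.html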
Upvotes: 26
Reputation: 159875
When you create your kind cluster you can specify host directories to be mounted on a virtual node. If you do that, then you can configure volumes with hostPath storage, and they will refer to the mount paths on the node.
So you would create a kind config file:
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /home/bill/work/foo
    containerPath: /foo
and then run
kind create cluster --config kind-config.yaml
to create the cluster.
In your Kubernetes YAML file, you need to mount that containerPath as a "host path" on the node. A pod spec might contain in part:
volumes:
- name: foo
  hostPath:
    path: /foo # matches kind containerPath:
containers:
- name: foo
  volumeMounts:
  - name: foo
    mountPath: /data # in the container filesystem
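For completeness, a minimal runnable Pod built around that fragment; the busybox image and the sleep command are assumptions for illustration:
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  volumes:
  - name: foo
    hostPath:
      path: /foo # matches kind containerPath:
  containers:
  - name: foo
    image: busybox # assumed image; any image with a shell works
    command: ["sleep", "3600"] # keep the pod alive so you can exec into it
    volumeMounts:
    - name: foo
      mountPath: /data # in the container filesystem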
Note that this setup is extremely specific to kind. Host paths aren't reliable storage in general: you can't control which node a pod gets scheduled on, and both pods and nodes can get deleted in real-world clusters. In some hosted setups (AWS EKS, Google GKE) you may not be able to control the host content at all.
You might revisit your application design to minimize the need for "files" as first-class objects. Rather than "update the volume" consider deploying a new Docker image with updated content; rather than "copy files out" consider an HTTP service you can expose through an ingress controller.
Upvotes: 34