Reputation: 868
I am trying to deploy a PersistentVolume for 3 pods to work on, and I want to use the cluster's node storage, i.e. not external storage like an EBS volume spun up on demand.
To achieve this, I did the following experiments:
1) I applied only the PVC resource defined below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: pv1
  name: pv1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status: {}
This spins up storage backed by the default storageclass, which in my case was DigitalOcean's block storage. So it created a 1Gi volume.
2) Created a PV resource and a PVC resource like below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: pv1
  name: pv1
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status: {}
After this, I see my claim is bound.
pavan@p1:~$ kubectl get pvc
NAME   STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pv1    Bound    task-pv-volume   10Gi       RWO            manual         2m5s
pavan@p1:~$ kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM         STORAGECLASS   REASON   AGE
task-pv-volume   10Gi       RWO            Retain           Bound    default/pv1   manual                  118m
pavan@p1:~$ kubectl describe pvc
Name: pv1
Namespace: default
StorageClass: manual
Status: Bound
Volume: task-pv-volume
Labels: io.kompose.service=pv1
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"io.kompose.service":"mo...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 10Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type     Reason              Age                 From                         Message
----     ------              ----                ----                         -------
Warning  ProvisioningFailed  28s (x8 over 2m2s)  persistentvolume-controller  storageclass.storage.k8s.io "manual" not found
Below are my questions that I am hoping to get answers/pointers to:
1) Regarding the above warning that the storage class could not be found: do I need to create one? If so, can you tell me why and how, or give any pointer? (Somehow this link does not state that: https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/)
2) Notice the PV has a storage capacity of 10Gi while the PVC requests only 1Gi, yet the PVC was bound with 10Gi capacity. Can't I share the same PV's capacity with other PVCs?
3) For question 2): if I have to create different PVs for different PVCs with the required capacities, do I have to create a storageclass as well? Or the same storage class and use selectors to select the corresponding PV?
Upvotes: 1
Views: 4183
Reputation: 6743
IMO the question was a bit prematurely specialized to sharing a single PV rather than a single storageclass (the latter being perfectly reusable between namespaces and pods). More info: use dynamic provisioning.
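As a rough sketch of that idea (the namespace names and the do-block-storage class name below are assumptions, not taken from your cluster; substitute whatever kubectl get sc reports), two independent claims can reference the same storageclass, and each one gets its own dynamically provisioned volume:
# Two claims sharing one storageclass; the provisioner creates a separate volume for each.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-a                        # illustrative name
  namespace: team-a                    # illustrative namespace
spec:
  storageClassName: do-block-storage   # assumed default sc name on DigitalOcean
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-b
  namespace: team-b
spec:
  storageClassName: do-block-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi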
Upvotes: 0
Reputation: 14084
I was trying to reproduce all behavior to answer all your questions. However, I don't have access to DigitalOcean, so I tested it on GKE.
Regarding the above warning that the storage class could not be found: do I need to create one?
According to the documentation and best practices, it is highly recommended to create a storageclass and then create the PV / PVC based on it. However, there is also something called manual provisioning, which is what you did in this case. Manual provisioning is when you manually create a PV first, and then a PVC with a matching spec.storageClassName field. Examples:
- Without a default storageclass, a PV, or the storageClassName parameter (afaik kubeadm does not provide a default storageclass), the PVC will be stuck on Pending with the event: no persistent volumes available for this claim and no storage class is set.
- With a default storageclass set up on the cluster but without the storageClassName parameter, the PVC will be created based on the default storageclass.
- With the storageClassName parameter set but no such storageclass existing (somewhere in the Cloud, Minikube, or Microk8s), the PVC will also get stuck on Pending with this warning: storageclass.storage.k8s.io "manual" not found. However, if you create a PV with the same storageClassName parameter, it will be bound in a while.
Example:
$ kubectl get pv,pvc
NAME                              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
persistentvolume/task-pv-volume   10Gi       RWO            Retain           Available           manual                  4s

NAME                        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pv1   Pending                                      manual         4m12s
...
$ kubectl get pv,pvc
NAME                              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM         STORAGECLASS   REASON   AGE
persistentvolume/task-pv-volume   10Gi       RWO            Retain           Bound    default/pv1   manual                  9s

NAME                        STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pv1   Bound    task-pv-volume   10Gi       RWO            manual         4m17s
The disadvantage of manual provisioning is that you have to create a PV for each PVC (only 1:1 pairings will work). If you use a storageclass, you can just create the PVC.
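For illustration, a minimal sketch of that dynamic route (the name data-pvc is made up): with a default storageclass present, this single PVC is all you apply, and the provisioner creates the backing volume and a matching PV for you.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc                # illustrative name
spec:
  # no storageClassName: the cluster's default storageclass is used
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi              # the provisioned volume matches this request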
If so, can you tell me why and how, or give any pointer?
You can use documentation examples or check here. As you are using a Cloud provider with a default storageclass (or sc for short) set up for you, you can export it to a yaml file with:
$ kubectl get sc -o yaml >> storageclass.yaml
(you will then need to clean it up, removing unique metadata, before you can reuse it). Or, if you have more than one sc, you have to specify which one. The names of the storageclasses can be obtained with:
$ kubectl get sc
Later you can refer to the K8s API to customize your storageclass.
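As an illustration only (the name local-manual is made up; kubernetes.io/no-provisioner is the standard provisioner value for PVs you create by hand, e.g. hostPath or local volumes), a custom storageclass could look like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-manual                        # illustrative name
provisioner: kubernetes.io/no-provisioner   # no dynamic provisioning; PVs are created manually
volumeBindingMode: WaitForFirstConsumer     # delay binding until a pod using the PVC is scheduled
reclaimPolicy: Retain                       # keep the PV (and its data) after the PVC is deleted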
Notice the PV has a storage capacity of 10Gi while the PVC requests only 1Gi, yet the PVC was bound with 10Gi capacity.
You manually created a PV with 10Gi, and the PVC requested 1Gi. As a PVC and a PV are bound 1:1 to each other, the PVC searched for a PV which meets all its conditions and bound to it. The PVC ("pv1") requested 1Gi and the PV ("task-pv-volume") met those requirements, so Kubernetes bound them. Unfortunately, much of the space is wasted in this case.
Can't I share the same PV's capacity with other PVCs?
Unfortunately, you cannot bind more than 1 PVC to the same PV as the relationship between PVC and PV is 1:1, but you can configure many pods or deployments to use the same PVC (within the same namespace).
I can advise you to look at this SO case, as it explains AccessMode specifics very well.
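To illustrate the many-pods-one-PVC point, a minimal sketch (the pod name and image are made up; claimName: pv1 is the claim from your question): every pod that should share the data just references the same claim, and with ReadWriteOnce they all have to land on the same node.
apiVersion: v1
kind: Pod
metadata:
  name: app-1                    # a second pod (app-2, ...) can reference the same claim
spec:
  containers:
    - name: app
      image: nginx               # illustrative image
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: shared-data
      persistentVolumeClaim:
        claimName: pv1           # the PVC from the question, shared by all such pods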
If I have to create different PVs for different PVCs with the required capacities, do I have to create a storageclass as well? Or the same storage class and use selectors to select the corresponding PV?
As I mentioned before, if you manually create a PV with a specific size and bind a PVC to it which requests less storage, the extra space will be wasted. So, you have to create the PV and PVC with the same resource request, or let a storageclass provision storage based on the PVC request.
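For instance, a size-matched pair for manual provisioning might look like the sketch below (the PV name and hostPath directory are examples, not taken from your cluster):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-1gi              # illustrative name
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi                 # matches the claim below, so no capacity is wasted
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data-1gi"        # example directory on the node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv1
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi               # same size as the PV above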
Upvotes: 5
Reputation: 3328
Yes, you have to create a storage class (see the documentation), but I guess DigitalOcean provides a default storage class; you can check it with:
kubectl get storageclasses
You can share one PV, but only with read-only access; if you need write access for all pods, you have to create multiple PVs (see the documentation).
Upvotes: 1