Reputation: 2348
I have a storage class:
kubectl describe storageclass my-local-storage
Name: my-local-storage
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"my-local-storage"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}
Provisioner: kubernetes.io/no-provisioner
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: <none>
Persistent Volume
kubectl describe pv my-local-pv
Name: my-local-pv
Labels: <none>
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: my-local-storage
Status: Bound
Claim: default/my-claim
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Mi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [kubenode2]
Message:
Source:
Type: LocalVolume (a persistent volume backed by local storage on a node)
Path: /home/node/serviceLogsNew
Events: <none>
Persistent Volume Claim
node@kubemaster:~/Desktop$ kubectl describe pvc my-claim
Name: my-claim
Namespace: default
StorageClass: my-local-storage
Status: Bound
Volume: my-local-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Mi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: podname-deployment-897d6947b-hnvvq
podname-deployment-897d6947b-q4f79
Events: <none>
Now, I have created a Persistent Volume with capacity: 1Mi.
I am running 2 pods attached to the PV via the PVC. The pods are creating log files. The total size of the files inside the folder used for the PV (/home/node/serviceLogsNew) has grown to 5 MB, and everything still works fine.
So, is capacity ignored when using a local PV / PVC? Is it configurable?
Upvotes: 0
Views: 2595
Reputation: 245
I just ran into a similar issue where I wanted to update the PVC to allow for resizing.
kubectl describe storageclass kafka
Name: kafka
IsDefaultClass: No
Annotations: meta.helm.sh/release-name=kafka,meta.helm.sh/release-namespace=data
Provisioner: kubernetes.io/aws-ebs
Parameters: fsType=ext4,type=gp2
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
When I tried to update the PVC, I got the following error:
error: persistentvolumeclaims "datadir-kafka" could not be patched: persistentvolumeclaims "datadir-kafka" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
All I had to do was edit the StorageClass used by the PVC and add allowVolumeExpansion: true.
kubectl edit storageclass kafka
...
parameters:
fsType: ext4
type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
allowVolumeExpansion: true # Added
volumeBindingMode: Immediate
I was then able to resize the PVC volume.
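For reference, once allowVolumeExpansion is in place, the resize itself can be requested by patching the storage request on the PVC, roughly like this (the 20Gi target size and the data namespace are assumptions based on the Helm annotations above):
kubectl patch pvc datadir-kafka -n data \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'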
Upvotes: 0
Reputation: 11098
Please take a look at this GitHub issue. I believe this comment also answers your question:
this is working as intended, kube can't/won't enforce the capacity of PVs, the capacity field on PVs is just a label. It's up to the "administrator" i.e. the creator of the PV to label it accurately so that when users create PVCs that needs >= X Gi, they get what they want.
This advice may also be useful in your case:
... If you want hard capacity boundaries with hostpath, then you should create a partition with the size you need, or use filesystem quota.
If this is just ephemeral data, then you can consider using emptyDir volumes. Starting in 1.7, you can specify a limit on the capacity, and kubelet will evict your pod if you exceed the limit.
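As a rough illustration of that last point, an emptyDir volume with a size limit could look like this (pod and container names, the image and the 5Mi limit are just example values):
apiVersion: v1
kind: Pod
metadata:
  name: logs-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    - name: logs
      emptyDir:
        sizeLimit: 5Mi          # kubelet evicts the pod if usage of this volume exceeds the limit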
The person who reported the issue actually uses the hostPath volume type, but local works pretty much the same way, and the same rules apply when it comes to setting capacity in the PV definition. Kubernetes doesn't have any mechanism that could enforce a specific disk quota on a directory mounted into your Pod from the node.
Note that in your PV definition you can set a capacity that is much higher than the actual capacity of the underlying disk. Such a PV will be created without any errors and will be usable, allowing you to write data up to the disk's actual maximum capacity.
While the capacity in a PV definition is just a label, with a PVC it's a bit of a different story. In this context the capacity can be interpreted as a request for a specific minimal capacity. If your storage provisioner is able to satisfy that request, the storage will be provisioned. If it's unable to provide storage with the minimal capacity defined in your claim, it won't be provisioned.
Let's assume you have defined a PV based on a specific directory on your host/node with a capacity of 150Gi. If you define a PVC that claims 151Gi, the storage won't be provisioned, because a PV with the declared capacity of 150Gi (no matter whether that is a real or a made-up value) can't satisfy the request set in the PVC. So in the case of a PVC, the capacity can be interpreted as a kind of constraint, but it still can't enforce or limit the use of the actually available underlying storage.
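In manifest form, that mismatch would look roughly like this (a sketch only; names, path and node are made up). The claim below stays Pending because no available PV declares at least 151Gi:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: big-local-pv
spec:
  capacity:
    storage: 150Gi              # declared by the PV creator, never verified against the disk
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-local-storage
  local:
    path: /mnt/disks/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - somenode
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: too-big-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-local-storage
  resources:
    requests:
      storage: 151Gi            # greater than any PV's declared capacity, so the claim stays Pending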
Don't forget that a local volume represents a mounted local storage device such as a disk, partition or directory, so it's not only a directory that you can use. It can be, for example, your /dev/sdb disk or /dev/sda5 partition. You can also decide to use an LVM partition with a strictly defined capacity.
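A local PV backed by a block device instead of a directory differs only in a couple of fields, roughly like this (the device path is just an example):
...
spec:
  capacity:
    storage: 100Gi
  volumeMode: Block             # expose the raw device to the pod; its real size becomes the hard limit
  local:
    path: /dev/sdb              # a whole disk; /dev/sda5 or an LVM logical volume works the same way
...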
Upvotes: 7