Reputation: 3526
I'm running a MySQL deployment on Kubernetes, but it seems my allocated space was not enough. I initially created a persistent volume of 50GB and now I'd like to expand that to 100GB.
I already saw that a persistent volume claim is immutable after creation, but can I somehow just resize the persistent volume and then recreate my claim?
Upvotes: 71
Views: 124844
Reputation: 14233
I updated the volume size using the commands below in an EKS cluster.
Allow volume resizing for the default storage class:
kubectl patch sc gp2 -p '{"allowVolumeExpansion": true}'
Resize the volume to 100Gi:
kubectl patch pvc your-pvc-name -n namespace -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'
Track the progress of the resize operation (it will not succeed if there are no nodes available in the cluster):
kubectl get events -n namespace
Upvotes: 0
Reputation: 8684
Below is how we can expand the volume size of Azure disks mounted on a StatefulSet (STS) pod when a storage class is used. (AWS EBS and GCP persistent volumes should be similar.)
Complete steps:
Check if volume resize is enabled in the storage class:
kubectl get storageclass
First, delete the StatefulSet. This is required because we will have to create a new STS with a higher volume size later on. Don't forget to back up the STS YAML if you don't have it in your repos.
After deleting the STS, wait for some time so that k8s can detach the volume from the node.
Next, modify the PVC with a higher value for the volume size.
At this point, if the volume is still attached, you will see a warning message in the PVC events: either the volume is still mounted to the pod, or you just have to wait and give k8s some time.
Next, run the describe command on the PVC; you should now see a message (in conditions) prompting you to start up the pod:
kubectl describe pvc app-master-volume-app-master-0
In the earlier step, we deleted the StatefulSet. Now we need to create and apply a new STS with the higher volume size, matching the value modified earlier in the PVC spec. When the new pod gets created, you will see a pod event indicating that the volume resize was successful.
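For example, the PVC modification step can be done with a patch like this (a sketch, assuming a 100Gi target size):
kubectl patch pvc app-master-volume-app-master-0 -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'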
Upvotes: 2
Reputation: 1
I have persistent volume with self created StorageClass (allowVolumeExpansion: true).
PV spec: accessMode: readWriteOnce
PVC spec: same
When I upgrade PV, changes are not reflected in PVC.
Upvotes: 0
Reputation: 443
Edit the PVC (kubectl edit pvc $your_pvc) to specify the new size. The key to edit is spec.resources.requests.storage.
Even though this worked quite well for one PVC of my StatefulSet, the others didn't manage to resize. I guess it's because the pods restarted too quickly, leaving no time for the resizing process to start due to the backoff. In fact, the pods started fast but took some time to be considered ready (increasing the backoff).
Here's my workaround:
Update the PVC, for example:
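(a sketch, assuming a 100Gi target size; the PVC name is a placeholder)
kubectl patch pvc <pvc-name> -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'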
Back up the STS spec:
kubectl get sts <sts-name> -o yaml > sts.yaml
Then delete the STS with cascade=orphan, so the pods keep running:
kubectl delete sts --cascade=orphan <sts-name>
Then delete one of the pods whose PVC wouldn't resize:
kubectl delete pod <pod-name>
Wait for the PVC to resize:
kubectl get pvc -w
Reapply the STS so the pod comes back:
kubectl apply -f sts.yaml
Wait for the pod to come back.
Repeat until all PVCs are resized!
Upvotes: 1
Reputation: 84
The first thing you can do is check the storage class that you are using and see if allowVolumeExpansion is set to true. If yes, then simply update the PVC with the requested volume size and check the status in the PVCs.
If this doesn't work for you, then try this (for AWS users):
Modify the underlying EBS volume (the awsElasticBlockStore volume) to the new size from the AWS console.
Run lsblk on the node to list the volumes attached.
Grow the filesystem with resize2fs or xfs_growfs, based on what type of filesystem you have.
Run df -h and check the volume.
Note: You can only modify a volume once in 6 hours.
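A sketch of the on-node commands (the device name and mount point are illustrative; check your lsblk output):
lsblk                          # confirm the larger size is visible on the block device
sudo growpart /dev/nvme0n1 1   # grow the partition first, if the filesystem sits on one
sudo resize2fs /dev/nvme0n1p1  # for an ext4 filesystem
sudo xfs_growfs /data          # for xfs, pass the mount point
df -h                          # verify the new size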
Upvotes: 4
Reputation: 1840
Update: volume expansion is available as a beta feature starting Kubernetes v1.11 for in-tree volume plugins. It is also available as a beta feature for volumes backed by CSI drivers as of Kubernetes v1.16.
If the volume plugin or CSI driver for your volume supports volume expansion, you can resize a volume via the Kubernetes API:
Ensure allowVolumeExpansion: true is set on the StorageClass associated with your PVC.
Request more storage by editing the PVC (spec.resources.requests).
For more information, see the Kubernetes documentation on expanding persistent volumes claims.
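For example (a sketch; the StorageClass and PVC names are placeholders):
kubectl patch storageclass <sc-name> -p '{"allowVolumeExpansion": true}'
kubectl patch pvc <pvc-name> -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'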
No, Kubernetes does not support automatic volume resizing yet.
Disk resizing is an entirely manual process at the moment.
Assume that you created a Kubernetes PV object with a given capacity and the PV is bound to a PVC, and then attached/mounted to a node for use by a pod. If you increase the volume size, pods will continue to be able to use the disk without issue, but they will not have access to the additional space.
To enable the additional space on the volume, you must manually resize the partitions. You can do that by following the instructions here. You'd have to delete the pods referencing the volume first, wait for it to detach, then manually attach/mount the volume to some VM instance you have access to, and run through the required steps to resize it.
Opened issue #35941 to track the feature request.
Upvotes: 13
Reputation: 38113
Yes, as of 1.11, persistent volumes can be resized on certain cloud providers. To increase volume size:
Edit the PVC (kubectl edit pvc $your_pvc) to specify the new size. The key to edit is spec.resources.requests.storage.
Once the pod using the volume is terminated, the filesystem is expanded and the size of the PV is increased. See the above link for details.
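For reference, the relevant fragment of the PVC looks like this (a sketch; 100Gi is just an example target size):
spec:
  resources:
    requests:
      storage: 100Gi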
Upvotes: 91
Reputation: 19585
There is some support for this in 1.8 and above, for some volume types, including gcePersistentDisk and awsElasticBlockStore, if certain experimental features are enabled on the cluster.
For other volume types, it must be done manually for now. In addition, support for doing this automatically while pods are online (nice!) is coming in a future version (currently slated for 1.11).
For now, these are the steps I followed to do this manually with an AzureDisk volume type (for managed disks), which currently does not support persistent disk resize (but support is coming for this too):
Make sure the PVCs are Bound and note which disks back them.
Delete the deployments/pods using the volumes so the disks get detached. Take special care for stateful sets that are managed by an operator, such as Prometheus -- the operator may need to be disabled temporarily. It may also be possible to use Scale to do one pod at a time. Detaching may take a few minutes, be patient.
Resize the managed disks in Azure, attach them to a VM, and run e2fsck and resize2fs to resize the filesystem on the PV (assuming an ext3/4 FS). Unmount the disks.
Delete the PVCs; the PVs should transition to Released.
Edit the PVs to make them Available again: update spec.capacity.storage, remove the spec.claimRef uid and resourceVersion fields, and remove status.phase.
Edit the PVC manifests: remove the metadata.resourceVersion field, remove the pv.kubernetes.io/bind-completed and pv.kubernetes.io/bound-by-controller annotations, set the spec.resources.requests.storage field to the updated PV size, and remove status.
Re-apply the PVCs. They may sit briefly in a Pending state, but both the PV and PVC should transition relatively quickly to Bound.
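A sketch of those PV/PVC operations (names are placeholders; the fields to change are listed in the steps above):
kubectl delete pvc <pvc-name>                # the PV moves to Released
kubectl edit pv <pv-name>                    # update capacity, clear claimRef uid/resourceVersion, remove status.phase
kubectl apply -f <edited-pvc-manifest>.yaml  # re-create the PVC with the new size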
Upvotes: 7
Reputation: 59
Yes, it can be done after version 1.8; have a look at volume expansion here.
Volume expansion was introduced in v1.8 as an alpha feature.
Upvotes: 0
Reputation: 1649
It is possible in Kubernetes 1.9 (alpha in 1.8) for some volume types: gcePersistentDisk, awsElasticBlockStore, Cinder, glusterfs, and rbd.
It requires enabling the PersistentVolumeClaimResize admission plug-in and storage classes whose allowVolumeExpansion field is set to true.
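A minimal StorageClass sketch with expansion enabled (the name and provisioner are placeholders; use whatever provisioner your cluster runs):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: resizable
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true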
See official docs at https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims
Upvotes: 19
Reputation: 1146
In terms of PVC/PV 'resizing', that's still not supported in k8s, though I believe it could potentially arrive in 1.9.
It's possible to achieve the same end result by dealing with the PVC/PV and (e.g.) the GCE PD directly, though.
For example, I had a gitlab deployment with a PVC and a dynamically provisioned PV via a StorageClass resource. Here are the steps I ran through:
1. Take a snapshot of the underlying GCE PD, in case anything goes wrong.
2. Resize the PD in GCE to the new capacity.
3. Run kubectl describe pv <name-of-pv> and keep the output (useful when creating the PV manifest later).
4. Delete the existing deployment/pod, PVC and PV resources.
5. Create a new PV manifest with "gcePersistentDisk: pdName: <name-of-pd>" defined, along with the other details grabbed at step 3. Make sure you update spec.capacity.storage to the new capacity you want the PV to have. (Although not essential, and it has no effect here, you may want to update the storage capacity/value in your PVC manifest, for posterity.)
6. Run kubectl apply (or equivalent) to recreate your deployment/pod, PVC and PV.
Note: some steps may not be essential, such as deleting some of the existing deployment/pod resources, though I personally prefer to remove them, seeing as I know the ReclaimPolicy is Retain, and I have a snapshot.
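A sketch of the recreated PV manifest (values are placeholders; capacity reflects the new size, and the remaining fields come from the describe output in step 3):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <name-of-pv>
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: <name-of-pd>
    fsType: ext4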
Upvotes: 4