Nrgyzer

Reputation: 1075

Resized PV & PVC are not applicied to Pod

I have an AKS cluster and I'm trying to resize the PVC it uses. The PVC originally had a capacity of 5Gi and I have already resized it to 25Gi:

> kubectl describe pv

Name:              mypv
Labels:            failure-domain.beta.kubernetes.io/region=northeurope
Annotations:       pv.kubernetes.io/bound-by-controller: yes
                   pv.kubernetes.io/provisioned-by: kubernetes.io/azure-disk
                   volumehelper.VolumeDynamicallyCreatedByKey: azure-disk-dynamic-provisioner
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      default
Status:            Bound
Claim:             default/test-pvc
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          25Gi
...

> kubectl describe pvc

Name:          test-pvc
Namespace:     default
StorageClass:  default
Status:        Bound
Volume:        mypv
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/azure-disk
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      25Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    mypod
Events:        <none>
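
For reference, the resize itself was requested by bumping the PVC's storage request from 5Gi to 25Gi, with something along these lines (the storage class has to allow volume expansion for this to be accepted):

> kubectl patch pvc test-pvc -p '{"spec":{"resources":{"requests":{"storage":"25Gi"}}}}'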

But when I run "df -h" in mypod, it still shows 5Gi (see /dev/sdc):

/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                 123.9G     22.3G    101.6G  18% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                     1.9G         0      1.9G   0% /sys/fs/cgroup
/dev/sdb1               123.9G     22.3G    101.6G  18% /dev/termination-log
shm                      64.0M         0     64.0M   0% /dev/shm
/dev/sdb1               123.9G     22.3G    101.6G  18% /etc/resolv.conf
/dev/sdb1               123.9G     22.3G    101.6G  18% /etc/hostname
/dev/sdb1               123.9G     22.3G    101.6G  18% /etc/hosts
/dev/sdc                  4.9G      4.4G    448.1M  91% /var/lib/mydb
tmpfs                     1.9G     12.0K      1.9G   0% /run/secrets/kubernetes.io/serviceaccount
tmpfs                     1.9G         0      1.9G   0% /proc/acpi
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                    64.0M         0     64.0M   0% /proc/sched_debug
tmpfs                     1.9G         0      1.9G   0% /proc/scsi
tmpfs                     1.9G         0      1.9G   0% /sys/firmware
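
To rule out a stale mount, the kernel's view of the block device can also be checked directly (this works even without a privileged container), for example:

/ # grep sdc /proc/partitions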

I already deleted my pod and even my deployment, but it still shows 5Gi. Any idea how I can use the entire 25Gi in my pod?

SOLUTION

Thank you mario for the long response. Unfortunately, the AKS dashboard already showed me that the disk has 25 GB. But calling the following still returned 5 GB:

az disk show --ids /subscriptions/<doesn't matter :-)>/resourceGroups/<doesn't matter :-)>/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-27ee71a5-<doesn't matter> --query "diskSizeGb"

So I finally called az disk update --ids <disk-id> --size-gb 25. Now the command above returned 25 and I started my pod again. Since my pod uses Alpine Linux, it does not resize the filesystem automatically, so I had to do it manually:

/ # apk add e2fsprogs-extra
(1/6) Installing libblkid (2.34-r1)
(2/6) Installing libcom_err (1.45.5-r0)
(3/6) Installing e2fsprogs-libs (1.45.5-r0)
(4/6) Installing libuuid (2.34-r1)
(5/6) Installing e2fsprogs (1.45.5-r0)
(6/6) Installing e2fsprogs-extra (1.45.5-r0)
Executing busybox-1.31.1-r9.trigger
OK: 48 MiB in 31 packages
/ # resize2fs /dev/sdc
resize2fs 1.45.5 (07-Jan-2020)
Filesystem at /dev/sdc is mounted on /var/lib/<something :-)>; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 4
The filesystem on /dev/sdc is now 6553600 (4k) blocks long.

Note: In my pod I temporarily set privileged mode to true:

...
spec:
  containers:
  - name: mypod
    image: the-image:version
    securityContext:
      privileged: true
    ports:
    ...

Otherwise resize2fs failed and said something like "no such device" or similar (sorry, I don't remember the exact error message anymore - forgot to copy it).

Upvotes: 5

Views: 3316

Answers (1)

mario

Reputation: 11108

I think this GitHub thread should answer your question.

As you can read there:

... I've tried resizing the persistent volume by adding allowVolumeExpansion: true for the storage class and editing the pvc to the desired size.

I assume that you've already done the above steps as well.
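
If not, you can check whether the storage class allows expansion and, if needed, enable it before editing the PVC, for example (using the default storage class from your output):

kubectl get storageclass default -o jsonpath='{.allowVolumeExpansion}'
kubectl patch storageclass default -p '{"allowVolumeExpansion": true}'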

Reading on, the issue looks exactly like yours:

After restarting the pod the size of the pvc has changed to the desired size i.e from 2Ti -> 3Ti

kubectl get pvc
mongo-0     Bound     pvc-xxxx   3Ti        RWO            managed-premium   1h

but when i login to the pod and do a df -h the disk size still remains at 2Ti.

kubetl exec -it mongo-0 bash
root@mongo-0:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdc        2.0T  372M  2.0T   1% /mongodb

Now let's take a look at the possible solution:

I couldn't see any changes in the portal when i update the pvc. I had to update the disk size in portal first - edit the pvc accordingly and then deleting the pod made it to work. Thanks

So please check the size of the disk in the Azure portal; if you see its size unchanged, this might be the case.

Otherwise, make sure you followed the steps mentioned in this comment. However, you don't get any error message such as VolumeResizeFailed when describing your PVC, so I believe this is not your case and the volume was properly detached from the node before resizing. So first of all, make sure there is no discrepancy between the volume size in the portal and the information you see when describing your PVC.
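
For example, you can compare the two values directly from the command line (disk ID as shown in the portal):

az disk show --ids <disk-id> --query diskSizeGb
kubectl get pvc test-pvc -o jsonpath='{.status.capacity.storage}'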

Upvotes: 1
