Reputation: 51
I want to use the Kubernetes dynamic PVC resize feature. After I edit the PVC size to a larger value, only the PV size changes, but the PVC status is still FileSystemResizePending. My Kubernetes version is 1.15.3, so normally the filesystem should expand automatically. Even if I recreate the pod, the PVC status is still FileSystemResizePending and the size does not change.
The CSI driver is aws-ebs-csi-driver, alpha version. The Kubernetes version is 1.15.3.
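For context, the resize was requested by editing the PVC's storage request, roughly like this (kubectl edit works the same way; the claim name test and the 25Gi target are the ones shown in the manifests below):
kubectl patch pvc test -n default -p '{"spec":{"resources":{"requests":{"storage":"25Gi"}}}}'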
The feature gates are set like this:
--feature-gates=ExpandInUsePersistentVolumes=true,CSINodeInfo=true,CSIDriverRegistry=true,CSIBlockVolume=true,VolumeSnapshotDataSource=true,ExpandCSIVolumes=true
The StorageClass manifest is:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
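As a side note (this check is not from the original setup), you can confirm that expansion is allowed on the class with something like:
kubectl get sc ebs-sc -o jsonpath='{.allowVolumeExpansion}'
# should print: true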
PV status:
kubectl describe pv pvc-44bbcd26-2d7c-4e42-a426-7803efb6a5e7
Name: pvc-44bbcd26-2d7c-4e42-a426-7803efb6a5e7
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by: ebs.csi.aws.com
Finalizers: [kubernetes.io/pv-protection external-attacher/ebs-csi-aws-com]
StorageClass: ebs-sc
Status: Bound
Claim: default/test
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 25Gi
Node Affinity:
  Required Terms:
    Term 0:  topology.ebs.csi.aws.com/zone in [ap-southeast-1b]
Message:
Source:
  Type:              CSI (a Container Storage Interface (CSI) volume source)
  Driver:            ebs.csi.aws.com
  VolumeHandle:      vol-0beb77489a4b06f4c
  ReadOnly:          false
  VolumeAttributes:  storage.kubernetes.io/csiProvisionerIdentity=1568278824948-8081-ebs.csi.aws.com
Events:              <none>
PVC status:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: ebs.csi.aws.com
  creationTimestamp: "2019-09-12T09:08:09Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: test
  name: test
  namespace: default
  resourceVersion: "5467113"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/test
  uid: 44bbcd26-2d7c-4e42-a426-7803efb6a5e7
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 25Gi
  storageClassName: ebs-sc
  volumeMode: Filesystem
  volumeName: pvc-44bbcd26-2d7c-4e42-a426-7803efb6a5e7
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 20Gi
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-09-12T09:10:29Z"
    message: Waiting for user to (re-)start a pod to finish file system resize of
      volume on node.
    status: "True"
    type: FileSystemResizePending
  phase: Bound
I expect the PVC size to change to the value I specified, but the PVC status always stays FileSystemResizePending.
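The mismatch between the requested and the actual size can also be seen directly, for example:
kubectl get pvc test -n default -o jsonpath='{.spec.resources.requests.storage} {.status.capacity.storage}'
# prints: 25Gi 20Gi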
Upvotes: 3
Views: 8670
Reputation: 11
Basically, under the hood the filesystem resize (resizefs) is expanding the filesystem to the requested size, which may take time depending on how much space was requested.
You should not delete the pod, since unmounting the volume is not supported while a resize is in progress. Doing so gives the following error:
Output: umount: /var/nutanix/var/lib/kubelet/pods/36862d8a-e0bf-4d0f-bdd3-c897a4ed5ccd/volumes/kubernetes.io~csi/pvc-acad4f90-9811-4371-9512-3e14ed1cbc64/mount: target is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
Even if the pod is deleted and gets stuck in the Terminating state, it will eventually terminate once the filesystem resize completes and a new pod is created.
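If you just want to wait it out, a simple approach (a sketch, using the claim name test from the question) is to watch the PVC until the status capacity catches up with the spec:
kubectl get pvc test -n default -w
kubectl describe pvc test -n default   # the FileSystemResizePending condition goes away once the resize is finished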
Upvotes: 1
Reputation: 8983
The reason is right there in your PVC status:
message: Waiting for user to (re-)start a pod to finish file system resize of
volume on node
You should restart the pod that uses that PV. This will cause the PV to be remounted, and the filesystem will be resized before the next mount.
Not all filesystems can be resized on the fly, so I think this is just compatibility behavior. It is also the safer option anyway.
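For example, if the pod belongs to a Deployment (the names below are illustrative, not taken from the question), a restart plus a quick check could look like:
kubectl rollout restart deployment/my-app
kubectl exec -it <new-pod-name> -- df -h /data   # /data being whatever mountPath the PVC is mounted at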
Upvotes: 6