Reputation: 5149
I have a Helm chart that contains a PV and a PVC to mount NFS volumes, and it works fine. Now I need to install the same chart on a new cluster that has very strict and limited security measures, and there my pods stay Pending because they can't mount the NFS share.
After some investigation, I found out that the problem is that the PVC and the PV end up with different storageClassName values:
kubectl -n 57 describe pvc gstreamer-claim
Events:
  Type     Reason          Age                 From                         Message
  ----     ------          ----                ----                         -------
  Warning  VolumeMismatch  98s (x83 over 21m)  persistentvolume-controller  Cannot bind to requested volume "gstreamer-57": storageClassName does not match
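For reference, one way to compare the storageClassName the API server actually stored on each object (object names taken from the error above):

# storageClassName as stored on the claim and on the volume
kubectl -n 57 get pvc gstreamer-claim -o jsonpath='{.spec.storageClassName}'
kubectl get pv gstreamer-57 -o jsonpath='{.spec.storageClassName}'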
This is very strange, since the PVC in my Helm chart doesn't have any storageClassName at all.
PVC:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: gstreamer-claim
    namespace: {{ .Release.Namespace }}
  spec:
    volumeName: gstreamer-{{ .Release.Namespace }}
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
PV:
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: gstreamer-{{ .Release.Namespace }}
  spec:
    capacity:
      storage: 10Gi
    accessModes:
      - ReadWriteOnce
    persistentVolumeReclaimPolicy: Recycle
    mountOptions:
      - hard
      - nfsvers=4.1
    nfs:
      server: {{ .Values.global.nfsserver }}
      path: /var/nfs/general/gstreamer-{{ .Release.Namespace }}
I tried to edit the PVC, but I was not able to change it.
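For example, even patching the claim's storageClassName directly (sketched below with the names from above) is rejected, because PVC spec fields other than the resource request are immutable once the claim exists:

# attempt to clear the storage class on the existing claim; the API server refuses it
kubectl -n 57 patch pvc gstreamer-claim --type merge -p '{"spec":{"storageClassName":""}}'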
Why is this happening? Can it be related to the cluster's security restrictions? How can I fix it?
Storage class info:
kubectl -n 57 get sc
NAME                      PROVISIONER                                        AGE
local-storage (default)   kubernetes.io/no-provisioner                       54d
nfs-client                cluster.local/nfs-client-nfs-client-provisioner    43m
kubectl -n 57 get sc local-storage -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"local-storage"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}
    storageclass.kubernetes.io/is-default-class: "true"
  creationTimestamp: "2020-03-31T20:46:39Z"
  name: local-storage
  resourceVersion: "458"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/local-storage
  uid: b8352eb1-7390-11ea-84a7-fa163e393634
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
Upvotes: 2
Views: 2926
Reputation: 44559
The mismatch happens because your PVC doesn't set a storageClassName, so the DefaultStorageClass admission plugin fills in the cluster default (local-storage), while your PV has no storageClassName at all, so the two can never bind.
With dynamic provisioning you don't need to create a PV explicitly. Create a PVC with storage class nfs-client:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gstreamer-claim
  namespace: {{ .Release.Namespace }}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-client
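Assuming the nfs-client provisioner shown in your kubectl get sc output is healthy, it will then create a PV for this claim and bind it automatically; a quick check:

kubectl -n 57 get pvc gstreamer-claim
# STATUS should turn Bound, with a provisioner-created volume in the VOLUME column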
Another option would be to make nfs-client the default storage class; then there is no need to specify storageClassName: nfs-client in the PVC.
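For example, something like this (a sketch; class names taken from your kubectl get sc output) moves the default-class annotation over to nfs-client:

# remove the default flag from local-storage
kubectl patch storageclass local-storage -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
# make nfs-client the default class
kubectl patch storageclass nfs-client -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'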
Upvotes: 3