Viktor Hedefalk

Reputation: 3914

Kubernetes Persistent Volume on GKE not mounting

I have a Kubernetes setup for a MongoDB database with a Persistent Volume on GKE that looks like this:


apiVersion: v1
kind: PersistentVolume
metadata:
  name: kb-front-db-pv
  labels:
    volume: kb-front-volume
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 15Gi
  storageClassName: standard
  gcePersistentDisk:
    pdName: kb-front-db
    fsType: xfs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kb-front-db-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      volume: kb-front-volume
---    
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: kb-front-db
  labels:
    app: kb-front-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kb-front-db
  template:
    metadata:
      labels:
        app: kb-front-db
    spec:
      containers:
      - name: kb-front-mongo
        image: mongo:4.1.13-bionic
        livenessProbe:
          exec:
            command:
            - mongo
            - --eval
            - "db.adminCommand('ping')"
        readinessProbe:
          exec:
            command:
            - mongo
            - --eval
            - "db.adminCommand('ping')"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: database-mount
          mountPath: "/data/db"
      volumes:
      - name: database-mount
        persistentVolumeClaim:
          claimName: kb-front-db-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: kb-front-db
spec:
  ports: 
  - port: 27017
    protocol: TCP
  selector:
    app: kb-front-db

I have created a disk named kb-front-db in europe-north1-a on GCE.
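For reference, the disk was created with something roughly along these lines (the 15 GB size here is just an assumption matching the PV capacity above):

gcloud compute disks create kb-front-db \
    --size=15GB \
    --zone=europe-north1-a

Describing the PV then gives me this: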

~/d/p/kb-ops ❯❯❯ kubectl describe pv kb-front-db-pv                                                                                                                                              ⏎ master ✖ ✱ ◼
Name:              kb-front-db-pv
Labels:            failure-domain.beta.kubernetes.io/region=europe-north1
                   failure-domain.beta.kubernetes.io/zone=europe-north1-a
                   volume=kb-front-volume
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"volume":"kb-front-volume"},"name":"kb-front-db-pv"},"...
                   pv.kubernetes.io/bound-by-controller: yes
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      standard
Status:            Bound
Claim:             default/kb-front-db-pvc
Reclaim Policy:    Retain
Access Modes:      RWO
Capacity:          15Gi
Node Affinity:
  Required Terms:
    Term 0:        failure-domain.beta.kubernetes.io/region in [europe-north1]
                   failure-domain.beta.kubernetes.io/zone in [europe-north1-a]
Message:
Source:
    Type:       GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
    PDName:     kb-front-db
    FSType:     xfs
    Partition:  0
    ReadOnly:   false
Events:         <none>

Searching for these failure-domain labels I came to

https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domain-beta-kubernetes-io-region

but I simply do not understand this text.

The disk looks fine in the Google Cloud console. The volume claim is bound:

~/d/p/kb-ops ❯❯❯ kubectl describe pvc kb-front-db-pvc                                                                                                                                              master ✖ ✱ ◼
Name:          kb-front-db-pvc
Namespace:     default
StorageClass:  standard
Status:        Bound
Volume:        kb-front-db-pv
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"kb-front-db-pvc","namespace":"default"},"spec":{"ac...
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/gce-pd
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      15Gi
Access Modes:  RWO
Events:        <none>
Mounted By:    kb-front-db-wzx2c

but the pod serving the mongo instance gets stuck in ContainerCreating with the following errors:

Events:
  Type     Reason       Age                    From                                                     Message
  ----     ------       ----                   ----                                                     -------
  Warning  FailedMount  7m12s (x448 over 17h)  kubelet, gke-woodenstake-cluster-1-pool-2-2edf41e5-fdrg  Unable to mount volumes for pod "kb-front-db-wzx2c_default(ba352929-968a-11e9-afbe-42010aa600fd)": timeout expired waiting for volumes to attach or mount for pod "default"/"kb-front-db-wzx2c". list of unmounted volumes=[database-mount]. list of unattached volumes=[database-mount default-token-6pb9l]
  Warning  FailedMount  2m30s (x505 over 17h)  kubelet, gke-woodenstake-cluster-1-pool-2-2edf41e5-fdrg  MountVolume.MountDevice failed for volume "kb-front-db-pv" : executable file not found in $PATH

What does this mean?

Upvotes: 0

Views: 1549

Answers (2)

Mark

Reputation: 4067

XFS is not supported on GKE's COS (Container-Optimized OS) node image.

You can use a different node image (Ubuntu) for this task, for example by creating a new node pool as sketched below.
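A minimal sketch of that, assuming the cluster name (guessed from the node names in your events) and the zone; the pool name ubuntu-pool is made up:

gcloud container node-pools create ubuntu-pool \
    --cluster=woodenstake-cluster-1 \
    --zone=europe-north1-a \
    --image-type=UBUNTU

The Ubuntu image includes the XFS userspace tools that COS is missing, which is what the executable file not found in $PATH error points at.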

Please refer to Storage driver support.

Hope this helps.

Upvotes: 4

Viktor Hedefalk

Reputation: 3914

It turned out that the error:

executable file not found in $PATH

came from trying to mount the disk as xfs. Changing it to ext4 on the PersistentVolume and re-creating it made it work. I would like to use xfs though, so I'll revisit this soonish…
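For reference, this is the relevant part of the PersistentVolume from the question with only the fsType changed (a sketch, not the full re-applied manifest):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kb-front-db-pv
  labels:
    volume: kb-front-volume
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 15Gi
  storageClassName: standard
  gcePersistentDisk:
    pdName: kb-front-db
    fsType: ext4    # changed from xfs, which the COS nodes could not mount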

Upvotes: 2
