Michael Böckling

Reputation: 7872

Why do I get "unbound immediate PersistentVolumeClaims" on Minikube?

I get "pod has unbound immediate PersistentVolumeClaims", and I don't know why. I run minikube v0.34.1 on macOS. Here are the configs:

es-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch
spec:
  capacity:
    storage: 400Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/elasticsearch/"

es-statefulset.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.3
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: k8s-logs
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.zen.ping.unicast.hosts
              value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
            - name: discovery.zen.minimum_master_nodes
              value: "2"
            - name: ES_JAVA_OPTS
              value: "-Xms256m -Xmx256m"
      initContainers:
        - name: fix-permissions
          image: busybox
          command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
        - name: increase-vm-max-map
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          command: ["sh", "-c", "ulimit -n 65536"]
          securityContext:
            privileged: true
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "standard"
        resources:
          requests:
            storage: 100Mi

es-svc.yaml

kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node

Upvotes: 13

Views: 16273

Answers (4)

Przemek Nowak

Reputation: 7703

I would also recommend checking first whether you have any leftover PVCs from earlier attempts, created with the wrong storage class or similar.

Keep in mind that even if you delete the StatefulSet, its PVCs are not deleted automatically; they can sit there "Pending" with the wrong storage class, so the modified StatefulSet can still have the same problem even after you fix the claim template :)
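A minimal check, assuming the claim names follow the usual <template-name>-<statefulset-name>-<ordinal> pattern (so data-es-cluster-0 and so on for the manifests above):

kubectl get pvc
kubectl describe pvc data-es-cluster-0

# if a claim is stuck with the wrong storage class, delete it so the
# StatefulSet controller recreates it from the fixed template:
kubectl delete pvc data-es-cluster-0 data-es-cluster-1 data-es-cluster-2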

Upvotes: 0

DataPlug

Reputation: 348

If you're developing locally (on minikube) and want to avoid this error, you can simply comment out this line:

storageClassName: "standard"

That way you won't need to create a PV or a StorageClass by hand; the volume will be dynamically provisioned on minikube. This is explained in the Kubernetes docs here.
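Applied to the claim template from the question, that looks like this (a sketch; with minikube's default-storageclass addon enabled, the claims then bind to the default StorageClass automatically):

volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      # storageClassName: "standard"
      resources:
        requests:
          storage: 100Mi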

If this doesn't work, make sure dynamic provisioning is enabled in your minikube using:

minikube addons enable storage-provisioner

then check using:

minikube addons list

You should see this:

|-----------------------------|----------|--------------|--------------------------------|
|         ADDON NAME          | PROFILE  |    STATUS    |           MAINTAINER           |
|-----------------------------|----------|--------------|--------------------------------|
| ambassador                  | minikube | disabled     | third-party (ambassador)       |
| auto-pause                  | minikube | disabled     | google                         |
| csi-hostpath-driver         | minikube | disabled     | kubernetes                     |
| dashboard                   | minikube | enabled ✅   | kubernetes                     |
| default-storageclass        | minikube | enabled ✅   | kubernetes                     |
| efk                         | minikube | disabled     | third-party (elastic)          |
| freshpod                    | minikube | disabled     | google                         |
| gcp-auth                    | minikube | disabled     | google                         |
| gvisor                      | minikube | disabled     | google                         |
| helm-tiller                 | minikube | disabled     | third-party (helm)             |
| ingress                     | minikube | disabled     | unknown (third-party)          |
| ingress-dns                 | minikube | disabled     | google                         |
| istio                       | minikube | disabled     | third-party (istio)            |
| istio-provisioner           | minikube | disabled     | third-party (istio)            |
| kong                        | minikube | disabled     | third-party (Kong HQ)          |
| kubevirt                    | minikube | disabled     | third-party (kubevirt)         |
| logviewer                   | minikube | disabled     | unknown (third-party)          |
| metallb                     | minikube | disabled     | third-party (metallb)          |
| metrics-server              | minikube | disabled     | kubernetes                     |
| nvidia-driver-installer     | minikube | disabled     | google                         |
| nvidia-gpu-device-plugin    | minikube | disabled     | third-party (nvidia)           |
| olm                         | minikube | disabled     | third-party (operator          |
|                             |          |              | framework)                     |
| pod-security-policy         | minikube | disabled     | unknown (third-party)          |
| portainer                   | minikube | disabled     | portainer.io                   |
| registry                    | minikube | disabled     | google                         |
| registry-aliases            | minikube | disabled     | unknown (third-party)          |
| registry-creds              | minikube | disabled     | third-party (upmc enterprises) |
| storage-provisioner         | minikube | enabled ✅   | google                         |
| storage-provisioner-gluster | minikube | disabled     | unknown (third-party)          |
| volumesnapshots             | minikube | disabled     | kubernetes                     |
|-----------------------------|----------|--------------|--------------------------------|

Upvotes: 0

uname0a

Reputation: 1

You can use environment variables for subPath names, like this:

env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
volumeMounts:
    - name: workdir1
      mountPath: /logs
      subPathExpr: $(POD_NAME)
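Note that the expansion only happens with subPathExpr; a plain subPath is taken literally, so $(POD_NAME) would end up as a directory named "$(POD_NAME)". subPathExpr went GA in Kubernetes 1.17.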

Upvotes: 0

Suresh Vishnoi

Reputation: 18373

In order for a volume to be accessed by many pods, the accessModes need to be "ReadWriteMany". Also, if each pod should get its own directory, a per-pod subPath has to be used (expanded via subPathExpr, since a plain subPath is not variable-expanded).

Since the issue was resolved in the comment section with @Michael Böckling, here is further information: using-subpath

volumeMounts:
  - name: data
    mountPath: /usr/share/elasticsearch/data
    subPathExpr: $(POD_NAME)
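For the $(POD_NAME) expansion to work, the variable must also be defined in the container's env via the downward API, and, as noted above, the PV and the claim template both need ReadWriteMany so all pods can share the volume. A sketch of the extra env entry (POD_NAME is not in the question's manifest and is added here for illustration):

env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name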

Upvotes: 12
