e7lT2P

Reputation: 1941

0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims

As the documentation states:

For each VolumeClaimTemplate entry defined in a StatefulSet, each Pod receives one PersistentVolumeClaim. In the nginx example above, each Pod receives a single PersistentVolume with a StorageClass of my-storage-class and 1 GiB of provisioned storage. If no StorageClass is specified, then the default StorageClass will be used. When a Pod is (re)scheduled onto a node, its volumeMounts mount the PersistentVolumes associated with its PersistentVolumeClaims. Note that the PersistentVolumes associated with the Pods' PersistentVolumeClaims are not deleted when the Pods or StatefulSet are deleted. This must be done manually.

The part I'm interested in is this: "If no StorageClass is specified, then the default StorageClass will be used."

I create a StatefulSet like this:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: ches
  name: ches
spec:
  serviceName: ches
  replicas: 1
  selector:
    matchLabels:
      app: ches
  template:
    metadata:
      labels:
        app: ches
    spec:
      serviceAccountName: ches-serviceaccount
      nodeSelector:
        ches-worker: "true"
      volumes:
      - name: data
        hostPath:
          path: /data/test
      containers:
      - name: ches
        image: [here I have the repo]
        imagePullPolicy: Always
        securityContext:
            privileged: true
        args:
        - server
        - --console-address
        - :9011
        - /data
        env:
        - name: MINIO_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: ches-keys
              key: access-key
        - name: MINIO_SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: ches-keys
              key: secret-key
        ports:
        - containerPort: 9000
          hostPort: 9011
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: data
          mountPath: /data
      imagePullSecrets:
        - name: edge-storage-token
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

Of course I have already created the secrets, imagePullSecrets, etc., and I have labeled the node with ches-worker=true.

When I apply the yaml file, the pod is in Pending status and kubectl describe pod ches-0 -n ches gives the following error:

Warning FailedScheduling 6s default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling

Am I missing something here?

Upvotes: 15

Views: 88763

Answers (6)

Mafei

Reputation: 3819

This error typically means that your PersistentVolumeClaim (PVC) is not bound to a PersistentVolume (PV), preventing the pod from scheduling. Here’s how to troubleshoot and fix it:

1. Check Your PVC Status

Run:

kubectl get pvc

If your PVC is stuck in the Pending state, it means Kubernetes cannot find a matching PersistentVolume (PV) and no provisioner has created one.
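For the StatefulSet in the question, the claim generated from the volumeClaimTemplate is named data-ches-0 (template name plus pod name), so a stuck claim would look roughly like this (illustrative output):

kubectl get pvc -n ches

NAME          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-ches-0   Pending                                                     1m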

2. Check Available PVs

Run:

kubectl get pv

Ensure that there is a PV available that matches the storage class, access mode, and capacity required by your PVC.

3. Check Minikube Storage Provisioner

Minikube usually provides a default StorageClass called standard. Check it using:

kubectl get storageclass

If it's missing, you can try enabling the Minikube storage provisioner:

minikube addons enable storage-provisioner
minikube addons enable default-storageclass

Then delete the existing PVC (if it's stuck) and let it recreate:

kubectl delete pvc <your-pvc-name>

Restart your pod or deployment to trigger PVC re-creation.
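If a StorageClass exists but none is marked as the default, you can also set the standard default-class annotation on it yourself (shown here for Minikube's standard class):

kubectl patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'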

4. Describe the PVC for Errors

Run:

kubectl describe pvc <your-pvc-name>

Check if there are any error messages indicating why the claim is not being bound.
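For an unbound claim with no usable StorageClass, the Events section usually contains a message along these lines (exact wording varies by Kubernetes version):

no persistent volumes available for this claim and no storage class is set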

5. If No Dynamic Provisioner Exists

If Minikube does not have a dynamic storage provisioner, you may need to manually create a PersistentVolume. Example:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"  # Path on Minikube node

Apply it with:

kubectl apply -f pv.yaml

Then make sure your PVC's storage request and accessModes match this PV, and that the storageClassName on both sides agrees (the PV above sets none, so the claim must not request a class either).
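As a sketch, a claim that binds statically to the my-pv volume above could look like this (my-pvc is a placeholder name; the empty storageClassName disables dynamic provisioning so the claim only matches PVs without a class, like my-pv):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: ""   # empty string: bind to an existing classless PV
  accessModes:
    - ReadWriteOnce      # must match an access mode offered by the PV
  resources:
    requests:
      storage: 1Gi       # must not exceed the PV's capacity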

6. Restart Pod to Retry Scheduling

If everything is set up correctly but the pod is still failing, try:

kubectl delete pod <pod-name>
kubectl apply -f your-deployment.yaml

or recreate the entire Minikube cluster (note that minikube delete wipes all cluster state):

minikube delete
minikube start

Upvotes: 0

e7lT2P

Reputation: 1941

When K3s is installed, it also deploys a storage class (Rancher's local-path provisioner) and marks it as the default.

Check with kubectl get storageclass:

NAME        PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE       ALLOWVOLUMEEXPANSION   AGE 
local-path  rancher.io/local-path   Delete          WaitForFirstConsumer    false                  8s

A plain Kubernetes cluster, on the other hand, does not come with a default storage class.

In order to solve the problem, install a storage provisioner and mark its storage class as the default.
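One common choice is Rancher's local-path provisioner; installing it and marking its class as the default looks like this (check the project's README for the current manifest URL):

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
kubectl patch storageclass local-path -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'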

Upvotes: 10

Nero Vanbiervliet

Reputation: 938

In my case, I was using minikube and the volume I requested looked like this:

volume:
  storage: 20Gi
  className: managed-csi
  hostPath: false

Changing the storage class and volume type fixed it, presumably because managed-csi is an Azure (AKS) class that does not exist on minikube, while standard is minikube's default:

volume:
  storage: 20Gi
  className: standard
  hostPath: true

Upvotes: 1

Taku

Reputation: 5937

This can also mean the underlying storage driver is not available; for example, the EBS CSI driver is not installed or not ready yet.
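To see which CSI drivers are registered in the cluster, and whether the driver's own pods (e.g. the EBS CSI controller on EKS, usually in kube-system) are actually running:

kubectl get csidriver
kubectl get pods -n kube-system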

Upvotes: 0

Alexandr Kovalenko

Reputation: 1051

I fixed this issue with the following steps.

  1. Check what you have:

kubectl get pvc

kubectl get pv

  2. Delete everything:

kubectl delete pvc your-name-pvc

kubectl delete pv your-name-pv

  3. Create everything from scratch.

Upvotes: 1

Ralle Mc Black

Reputation: 1203

You need to create a PV in order to get a PVC bound. If you want PVs to be created automatically from PVC claims, you need a provisioner installed in your cluster.

First create a PV with at least the amount of space needed by your PVC. Then you can apply your deployment YAML, which contains the PVC.
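As a minimal sketch for the 1Gi claim from the question (the name and path are placeholders, and a hostPath PV is only suitable for single-node or test clusters):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ches-pv              # placeholder name
spec:
  capacity:
    storage: 1Gi             # at least what the PVC requests
  accessModes:
    - ReadWriteOnce          # must cover the PVC's access mode
  hostPath:
    path: /mnt/ches-data     # placeholder path on the node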

Upvotes: 9
