combivel

Reputation: 11

k8s - Cinder "0/x nodes are available: x node(s) had volume node affinity conflict"

I have my own Kubernetes cluster and I'm trying to connect it to OpenStack / Cinder.

When I create a PVC, I can see a PV in Kubernetes and the corresponding volume in OpenStack. But when I attach a pod to the PVC, I get the message "0/x nodes are available: x node(s) had volume node affinity conflict".

My test YAML:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: classic
provisioner: kubernetes.io/cinder
parameters:
  type: classic

---


kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-infra-consuldata4
  namespace: infra
spec:
  storageClassName: classic
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: consul
  namespace: infra
  labels:
    app: consul
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consul
  template:
    metadata:
      labels:
        app: consul
    spec:
      containers:
      - name: consul
        image: consul:1.4.3
        volumeMounts:
        - name: data
          mountPath: /consul
        resources:
          requests:
            cpu: 100m
          limits:
            cpu: 500m
        command: ["consul", "agent", "-server", "-bootstrap", "-ui", "-bind", "0.0.0.0", "-client", "0.0.0.0", "-data-dir", "/consul"]
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc-infra-consuldata4

The result:

kpro describe pvc -n infra
Name:          pvc-infra-consuldata4
Namespace:     infra
StorageClass:  classic
Status:        Bound
Volume:        pvc-76bfdaf1-40bb-11e9-98de-fa163e53311c
Labels:        
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"pvc-infra-consuldata4","namespace":"infra"},"spec":...
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/cinder
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Events:
  Type       Reason                 Age   From                         Message
  ----       ------                 ----  ----                         -------
  Normal     ProvisioningSucceeded  61s   persistentvolume-controller  Successfully provisioned volume pvc-76bfdaf1-40bb-11e9-98de-fa163e53311c using kubernetes.io/cinder
Mounted By:  consul-85684dd7fc-j84v7
kpro describe po -n infra consul-85684dd7fc-j84v7
Name:               consul-85684dd7fc-j84v7
Namespace:          infra
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=consul
                    pod-template-hash=85684dd7fc
Annotations:        <none>
Status:             Pending
IP:                 
Controlled By:      ReplicaSet/consul-85684dd7fc
Containers:
  consul:
    Image:      consul:1.4.3
    Port:       <none>
    Host Port:  <none>
    Command:
      consul
      agent
      -server
      -bootstrap
      -ui
      -bind
      0.0.0.0
      -client
      0.0.0.0
      -data-dir
      /consul
    Limits:
      cpu:  2
    Requests:
      cpu:        500m
    Environment:  <none>
    Mounts:
      /consul from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-nxchv (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-infra-consuldata4
    ReadOnly:   false
  default-token-nxchv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-nxchv
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  36s (x6 over 2m40s)  default-scheduler  0/6 nodes are available: 6 node(s) had volume node affinity conflict. 

Why does K8s successfully create the Cinder volume but fail to schedule the pod?

Upvotes: 1

Views: 6325

Answers (2)

ErnieAndBert

Reputation: 1702

I also ran into this error when I forgot to deploy the EBS CSI driver before trying to mount an EBS volume in my pod.

kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"
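
To double-check that the driver actually came up before retrying the pod, something along these lines should work; treat it as a sketch, since the pod names and namespace can vary between driver versions:

# List the EBS CSI controller/node pods (names may vary by driver version)
kubectl get pods -n kube-system | grep ebs-csi

Once the controller and node pods are running, the volume attach should be able to proceed.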

Upvotes: 1

Yuci

Reputation: 30079

Try finding out the nodeAffinity of your persistent volume:

$ kubectl describe pv pvc-76bfdaf1-40bb-11e9-98de-fa163e53311c
Node Affinity:     
  Required Terms:  
    Term 0:        kubernetes.io/hostname in [xxx]

Then check whether xxx matches the hostname label of the node (yyy) that your pod is supposed to run on:

$ kubectl get nodes
NAME      STATUS   ROLES               AGE   VERSION
yyy       Ready    worker              8d    v1.15.3

If they don't match, you will get the "x node(s) had volume node affinity conflict" error, and you need to re-create the persistent volume with the correct nodeAffinity configuration.
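
For reference, the affinity the scheduler checks lives in the PV object itself. Below is a minimal sketch of what such a block can look like for a dynamically provisioned volume; the label key and value are only placeholders, and with Cinder it is often a failure-domain / availability-zone label rather than kubernetes.io/hostname:

# Hypothetical excerpt of a provisioned PV; key and value are placeholders
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: failure-domain.beta.kubernetes.io/zone
          operator: In
          values:
          - nova

You can compare that key/value pair against your node labels with kubectl get nodes --show-labels; if no node carries a matching label (for example because the OpenStack cloud provider is not configured on the kubelets), every node will be rejected with this affinity conflict.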

Upvotes: 3
