sope

Reputation: 1821

Use RBD in kubernetes error

I followed the example to use RBD in Kubernetes, but it does not succeed. Can anyone help? The error:

Nov 09 17:58:03 core-1-97 kubelet[1254]: E1109 17:58:03.289702    1254 volumes.go:114] Could not create volume builder for pod 5df3610e-86c8-11e5-bc34-002590fdf95c: can't use volume plugins for (volume.Spec){Name:(string)rbdpd VolumeSource:(api.VolumeSource){HostPath:(*api.HostPathVolumeSource)<nil> EmptyDir:(*api.EmptyDirVolumeSource)<nil> GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> GitRepo:(*api.GitRepoVolumeSource)<nil> Secret:(*api.SecretVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)<nil> RBD:(*api.RBDVolumeSource){CephMonitors:([]string)[10.14.1.33:6789 10.14.1.35:6789 10.14.1.36:6789] RBDImage:(string)foo FSType:(string)ext4 RBDPool:(string)rbd RadosUser:(string)admin Keyring:(string) SecretRef:(*api.LocalObjectReference){Name:(string)ceph-secret} ReadOnly:(bool)true}} PersistentVolumeSource:(api.PersistentVolumeSource){GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> HostPath:(*api.HostPathVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> RBD:(*api.RBDVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil>}}: no volume plugin matched
Nov 09 17:58:03 core-1-97 kubelet[1254]: E1109 17:58:03.289770    1254 kubelet.go:1210] Unable to mount volumes for pod "rbd2_default": can't use volume plugins for (volume.Spec){Name:(string)rbdpd VolumeSource:(api.VolumeSource){HostPath:(*api.HostPathVolumeSource)<nil> EmptyDir:(*api.EmptyDirVolumeSource)<nil> GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> GitRepo:(*api.GitRepoVolumeSource)<nil> Secret:(*api.SecretVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)<nil> RBD:(*api.RBDVolumeSource){CephMonitors:([]string)[10.14.1.33:6789 10.14.1.35:6789 10.14.1.36:6789] RBDImage:(string)foo FSType:(string)ext4 RBDPool:(string)rbd RadosUser:(string)admin Keyring:(string) SecretRef:(*api.LocalObjectReference){Name:(string)ceph-secret} ReadOnly:(bool)true}} PersistentVolumeSource:(api.PersistentVolumeSource){GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> HostPath:(*api.HostPathVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> RBD:(*api.RBDVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil>}}: no volume plugin matched; skipping pod
Nov 09 17:58:03 core-1-97 kubelet[1254]: E1109 17:58:03.299458    1254 pod_workers.go:111] Error syncing pod 5df3610e-86c8-11e5-bc34-002590fdf95c, skipping: can't use volume plugins for (volume.Spec){Name:(string)rbdpd VolumeSource:(api.VolumeSource){HostPath:(*api.HostPathVolumeSource)<nil> EmptyDir:(*api.EmptyDirVolumeSource)<nil> GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> GitRepo:(*api.GitRepoVolumeSource)<nil> Secret:(*api.SecretVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)<nil> RBD:(*api.RBDVolumeSource){CephMonitors:([]string)[10.14.1.33:6789 10.14.1.35:6789 10.14.1.36:6789] RBDImage:(string)foo FSType:(string)ext4 RBDPool:(string)rbd RadosUser:(string)admin Keyring:(string) SecretRef:(*api.LocalObjectReference){Name:(string)ceph-secret} ReadOnly:(bool)true}} PersistentVolumeSource:(api.PersistentVolumeSource){GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> HostPath:(*api.HostPathVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> RBD:(*api.RBDVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil>}}: no volume plugin matched

The template file I used, rbd-with-secret.json:

core@core-1-94 ~/kubernetes/examples/rbd $ cat rbd-with-secret.json
{
"apiVersion": "v1",
"id": "rbdpd2",
"kind": "Pod",
"metadata": {
    "name": "rbd2"
},
"spec": {
    "nodeSelector": {"kubernetes.io/hostname" :"10.12.1.97"},
    "containers": [
        {
            "name": "rbd-rw",
            "image": "kubernetes/pause",
            "volumeMounts": [
                {
                    "mountPath": "/mnt/rbd",
                    "name": "rbdpd"
                }
            ]
        }
    ],
    "volumes": [
        {
            "name": "rbdpd",
            "rbd": {
                "monitors": [
            "10.14.1.33:6789",
        "10.14.1.35:6789",
            "10.14.1.36:6789"
            ],
                "pool": "rbd",
                "image": "foo",
                "user": "admin",
                "secretRef": {"name": "ceph-secret"},
                "fsType": "ext4",
                "readOnly": true
            }
        }
    ]
}
}

The secret:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFBemV6bFdZTXdXQWhBQThxeG1IT2NKa0QrYnE0K3RZUmtsVncK
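As a side note, the `data.key` field of the Secret is the base64-encoded Ceph key. A minimal sketch of producing it (assuming GNU coreutils `base64` and the key value from the keyring shown further down):

```shell
# Base64-encode the Ceph admin key for the Secret's data.key field.
# The key value here is the one from /etc/ceph/ceph.client.admin.keyring.
KEY='AQAzezlWYMwWAhAA8qxmHOcJkD+bq4+tYRklVw=='
echo "$KEY" | base64
```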

the ceph config is in /etc/ceph/

core@core-1-94 ~/kubernetes/examples/rbd $ ls -alh /etc/ceph
total 20K
drwxr-xr-x  2 root root 4.0K Nov  6 18:38 .
drwxr-xr-x 26 root root 4.0K Nov  9 17:07 ..
-rw-------  1 root root   63 Nov  4 11:27 ceph.client.admin.keyring
-rw-r--r--  1 root root  264 Nov  6 18:38 ceph.conf
-rw-r--r--  1 root root  384 Nov  6 14:35 ceph.conf.orig
-rw-------  1 root root    0 Nov  4 11:27 tmpkqDKwf

and the key:

    core@core-1-94 ~/kubernetes/examples/rbd $ sudo cat /etc/ceph/ceph.client.admin.keyring
    [client.admin]
          key = AQAzezlWYMwWAhAA8qxmHOcJkD+bq4+tYRklVw==

Upvotes: 1

Views: 596

Answers (1)

briangrant

Reputation: 845

You'll get "no volume plugin matched" if the `rbd` command isn't installed and on the kubelet's PATH.

As the example specifies, you need to ensure that ceph is installed on your Kubernetes nodes. For instance, in Fedora: $ sudo yum -y install ceph-common
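A quick way to check the node (a sketch, assuming a POSIX shell on the node; the `rbd` binary ships with the `ceph-common` package):

```shell
# Check whether the rbd client binary is on PATH on the node;
# the kubelet shells out to it to map RBD images.
if command -v rbd >/dev/null 2>&1; then
    rbd --version
else
    echo "rbd not found: install ceph-common on this node"
fi
```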

I'll file an issue to clarify the error messages.

Upvotes: 1
