Reputation: 1
I deployed my scheduler as a custom scheduler by following https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/, and I bound the ServiceAccount kube-system:my-scheduler to the ClusterRole cluster-admin. My my-scheduler.yaml is shown below.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-scheduler
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-scheduler-as-kube-scheduler
subjects:
- kind: ServiceAccount
  name: my-scheduler
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
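For reference, pods opt into a custom scheduler through .spec.schedulerName; a minimal Pod manifest looks like this (the pod name and nginx image are placeholders, not taken from my setup):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod                  # placeholder name
spec:
  schedulerName: my-scheduler     # must match the scheduler name the custom scheduler registers
  containers:
  - name: app
    image: nginx                  # placeholder image
```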
The my-scheduler pod runs successfully. However, when I deploy pods with .spec.schedulerName set to my-scheduler, they are not scheduled and stay in Pending status. I checked the my-scheduler pod log with the kubectl logs -f my-scheduler-8699f6f86-vvn5p -n kube-system command; the error is shown below. I don't know why my-scheduler fails to watch *v1.CSIStorageCapacity.
W1108 07:05:16.968419 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:149: failed to list *v1.CSIStorageCapacity: the server could not find the requested resource
E1108 07:05:16.968443 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:149: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: the server could not find the requested resource
Can anyone figure out the solution to this problem?
Upvotes: 0
Views: 346
Reputation: 1
The versions of the k8s components in my cluster are shown below. I am sure the problem is not caused by incompatibility between the k8s components themselves.
[root@compileHost pluginsDir]# kubectl get nodes -o yaml | grep -i 'apiserver'
- registry.aliyuncs.com/google_containers/kube-apiserver@sha256:d10db42c2353539ce15006854edfb6707ba6025f282d59d962729ed3b6039004
- registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
[root@compileHost pluginsDir]# kubectl get nodes -o yaml | grep -i 'kubelet'
message: kubelet has sufficient memory available
reason: KubeletHasSufficientMemory
message: kubelet has no disk pressure
reason: KubeletHasNoDiskPressure
message: kubelet has sufficient PID available
reason: KubeletHasSufficientPID
message: kubelet is posting ready status
reason: KubeletReady
kubeletEndpoint:
kubeletVersion: v1.23.0
message: kubelet has sufficient memory available
reason: KubeletHasSufficientMemory
message: kubelet has no disk pressure
reason: KubeletHasNoDiskPressure
message: kubelet has sufficient PID available
reason: KubeletHasSufficientPID
message: kubelet is posting ready status
reason: KubeletReady
kubeletEndpoint:
kubeletVersion: v1.23.0
[root@compileHost pluginsDir]# kubectl get nodes -o yaml | grep -i 'kube-controller-manager'
- registry.aliyuncs.com/google_containers/kube-controller-manager@sha256:0bfbb13e5e9cec329523b6f654687af8ce058adbc90b42e5af7a929ac22e2a53
- registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0
[root@compileHost pluginsDir]# kubectl get nodes -o yaml | grep -i 'kube-scheduler'
- k8s.gcr.io/scheduler-plugins/kube-scheduler@sha256:880fb6f6b4bfa0d229d317c2f223313e3334ac013b6697a797bb16584d57f7c7
- k8s.gcr.io/scheduler-plugins/kube-scheduler:v0.23.10
- localhost:5000/scheduler-plugins/kube-scheduler:latest
- registry.aliyuncs.com/google_containers/kube-scheduler@sha256:af8166ce28baa7cb902a2c0d16da865d5d7c892fe1b41187fd4be78ec6291c23
- registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
- localhost:5000/scheduler-plugins/my-kube-scheduler:1.0
- k8s.gcr.io/scheduler-plugins/kube-scheduler@sha256:880fb6f6b4bfa0d229d317c2f223313e3334ac013b6697a797bb16584d57f7c7
- k8s.gcr.io/scheduler-plugins/kube-scheduler:v0.23.10
- localhost:5000/scheduler-plugins/kube-scheduler:latest
- registry.aliyuncs.com/google_containers/kube-scheduler@sha256:af8166ce28baa7cb902a2c0d16da865d5d7c892fe1b41187fd4be78ec6291c23
- registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
- localhost:5000/scheduler-plugins/my-kube-scheduler:1.0
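In case it helps anyone debugging the same error: CSIStorageCapacity graduated to storage.k8s.io/v1 in Kubernetes 1.24, so a v1.23 API server serves it only as v1beta1, and a scheduler binary compiled against client-go from 1.24 or later would try to list the v1 version and fail exactly like the log above. A way to check which group/version the server actually advertises for the resource is to query discovery (standard kubectl commands; the exact output depends on the cluster, so none is shown):

```shell
# List the API versions the server advertises for the storage group.
kubectl api-versions | grep storage.k8s.io

# Show which group/version serves CSIStorageCapacity on this cluster.
kubectl api-resources --api-group=storage.k8s.io | grep -i csistoragecapacity
```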
Upvotes: 0