Peter Penzov

Reputation: 1682

flannel" cannot get resource "pods" in API group "" in the namespace "kube-flannel"

I'm trying to install Kubernetes with dashboard but I get the following issue:

test@ubuntukubernetes1:~$ kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS              RESTARTS         AGE
kube-flannel           kube-flannel-ds-ksc9n                        0/1     CrashLoopBackOff    14 (2m15s ago)   49m
kube-system            coredns-6d4b75cb6d-27m6b                     0/1     ContainerCreating   0                4h
kube-system            coredns-6d4b75cb6d-vrgtk                     0/1     ContainerCreating   0                4h
kube-system            etcd-ubuntukubernetes1                       1/1     Running             1 (106m ago)     4h
kube-system            kube-apiserver-ubuntukubernetes1             1/1     Running             1 (106m ago)     4h
kube-system            kube-controller-manager-ubuntukubernetes1    1/1     Running             1 (106m ago)     4h
kube-system            kube-proxy-6v8w6                             1/1     Running             1 (106m ago)     4h
kube-system            kube-scheduler-ubuntukubernetes1             1/1     Running             1 (106m ago)     4h
kubernetes-dashboard   dashboard-metrics-scraper-7bfdf779ff-dfn4q   0/1     Pending             0                48m
kubernetes-dashboard   dashboard-metrics-scraper-8c47d4b5d-9kh7h    0/1     Pending             0                73m
kubernetes-dashboard   kubernetes-dashboard-5676d8b865-q459s        0/1     Pending             0                73m
kubernetes-dashboard   kubernetes-dashboard-6cdd697d84-kqnxl        0/1     Pending             0                48m
test@ubuntukubernetes1:~$

Log files:

test@ubuntukubernetes1:~$ kubectl logs --namespace kube-flannel kube-flannel-ds-ksc9n
Defaulted container "kube-flannel" out of: kube-flannel, install-cni-plugin (init), install-cni (init)
I0808 23:40:17.324664       1 main.go:207] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: version:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true ifaceCanReach: subnetFile:/run/flannel/subnet.env publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
W0808 23:40:17.324753       1 client_config.go:614] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
E0808 23:40:17.547453       1 main.go:224] Failed to create SubnetManager: error retrieving pod spec for 'kube-flannel/kube-flannel-ds-ksc9n': pods "kube-flannel-ds-ksc9n" is forbidden: User "system:serviceaccount:kube-flannel:flannel" cannot get resource "pods" in API group "" in the namespace "kube-flannel"
test@ubuntukubernetes1:~$

Do you know how this issue can be solved? I used the following installation steps:

sudo swapoff -a
# Remove the following line from /etc/fstab:
/swap.img       none    swap    sw      0       0

sudo apt update
sudo apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker

sudo apt install apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" >> ~/kubernetes.list
sudo mv ~/kubernetes.list /etc/apt/sources.list.d
sudo apt update
sudo apt install kubeadm kubelet kubectl kubernetes-cni

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml


kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml

kubectl proxy --address 192.168.1.133 --accept-hosts '.*'

Can you advise?

Upvotes: 1

Views: 2359

Answers (3)

George G.

Reputation: 1

I tried to deploy a 3-node cluster with 1 master and 2 workers, following a similar method to the one described above. I then tried to deploy Nginx, but it failed. When I checked my pods, flannel was running on the master but failing on the worker nodes.

I deleted flannel and started from the beginning. At first I applied only kube-flannel.yml, since there was some mention that kube-flannel-rbac.yml was causing issues.

ubuntu@master:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

ubuntu@master:~$ kubectl describe ClusterRoleBinding flannel
Name:         flannel
Labels:
Annotations:
Role:
  Kind:  ClusterRole
  Name:  flannel
Subjects:
  Kind            Name     Namespace
  ----            ----     ---------
  ServiceAccount  flannel  kube-flannel

Then I was able to deploy the nginx image. However, I then deleted it and applied the second yaml (kube-flannel-rbac.yml). This changed the namespace in the binding:

ubuntu@master:~$ kubectl describe ClusterRoleBinding flannel
Name:         flannel
Labels:
Annotations:
Role:
  Kind:  ClusterRole
  Name:  flannel
Subjects:
  Kind            Name     Namespace
  ----            ----     ---------
  ServiceAccount  flannel  kube-system

and again the nginx deployment was successful.
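A quicker way to see which namespace the binding currently points at, instead of reading the whole describe output, is a jsonpath query (just a convenience check, nothing flannel-specific):

kubectl get clusterrolebinding flannel -o jsonpath='{.subjects[0].namespace}{"\n"}'
# prints kube-flannel after applying only kube-flannel.yml,
# and kube-system after applying kube-flannel-rbac.yml on top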

What is the purpose of this config? Is it needed, given that nginx deploys successfully both with and without it?

https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

Upvotes: 0

h1pp13p373

Reputation: 41

I had the same situation on a new deployment today. Turns out, the kube-flannel-rbac.yml file had the wrong namespace. It's now 'kube-flannel', not 'kube-system', so I modified it and re-applied.

I also added a 'namespace' entry under each 'name' entry in kube-flannel.yml, except under the roleRef heading (it threw an error when I added it there). All pods came up as 'Running' after the new yml was applied.
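For reference, after the namespace fix the ClusterRoleBinding in kube-flannel-rbac.yml should look roughly like this (a sketch of the relevant part; only the subject namespace differs from the published file):

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-flannel   # was kube-system in the published rbac file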

Upvotes: 2

Adiii

Reputation: 60074

Seems like the problem is with kube-flannel-rbac.yml:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

It expects the flannel service account to be in the kube-system namespace:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system

So just delete it:

kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

since kube-flannel.yml already creates the ClusterRoleBinding with the ServiceAccount in the right namespace.

https://github.com/flannel-io/flannel/blob/master/Documentation/kube-flannel.yml#L43
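To confirm the permissions are back in order, you can impersonate the flannel ServiceAccount with kubectl auth can-i, and then delete the CrashLooping pod so the DaemonSet recreates it immediately instead of waiting out the back-off (a quick sketch; it assumes the DaemonSet pods carry the usual app=flannel label):

# should print "yes" once the RBAC is correct
kubectl auth can-i get pods -n kube-flannel --as=system:serviceaccount:kube-flannel:flannel

# recreate the flannel pod so it retries right away
kubectl -n kube-flannel delete pod -l app=flannel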

Upvotes: 0
