Reputation: 3892
I recently created a cluster on EKS with eksctl. Running kubectl logs -f mypod-0
fails with an authorization error:
Error from server (InternalError): Internal error occurred: Authorization error (user=kube-apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)
Any advice or insight is appreciated.
Upvotes: 18
Views: 29372
Reputation: 369
I experienced this error in my AWS EKS cluster when the DNS servers configured in the DHCP options set for the VPC containing the EKS cluster became unreachable. When DNS is unreachable, authorization requests fail because they cannot reach the authorization source.
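If you suspect this, one quick way to check which DNS servers the VPC hands out is the AWS CLI (the VPC and DHCP options set IDs below are placeholders):
aws ec2 describe-vpcs --vpc-ids vpc-0123456789abcdef0 --query 'Vpcs[0].DhcpOptionsId'
aws ec2 describe-dhcp-options --dhcp-options-ids dopt-0123456789abcdef0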
Upvotes: 0
Reputation: 2681
I was able to solve this issue by editing the aws-auth
ConfigMap: I added the group system:nodes
to the worker node role mapping.
apiVersion: v1
data:
  mapRoles: |
    - rolearn: 'WORKER ROLE'
      username: 'NAME'
      groups:
        - ...
        - system:nodes
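If it helps, the ConfigMap can be edited in place, or you can inspect the current mappings first (the cluster name below is a placeholder):
kubectl edit configmap aws-auth -n kube-system
eksctl get iamidentitymapping --cluster my-cluster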
Upvotes: 1
Reputation: 2037
On an on-prem cluster, I had this issue after I changed the DNS address of the master. You need to update the DNS name in /etc/kubernetes/kubelet.conf
on each node and then run sudo systemctl restart kubelet.service.
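For example, something along these lines on each node (the old and new API server addresses are hypothetical; keep a backup of the file):
sudo sed -i.bak 's|https://old-master.example.com:6443|https://new-master.example.com:6443|' /etc/kubernetes/kubelet.conf
sudo systemctl restart kubelet.service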
Upvotes: 2
Reputation: 1
This can happen if your aws-auth ConfigMap is broken or empty, which can occur if, for example, you run multiple eksctl operations in parallel.
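A quick way to check whether the ConfigMap is in that state is to dump it and look at mapRoles:
kubectl get configmap aws-auth -n kube-system -o yaml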
Upvotes: 0
Reputation: 469
You need to create a ClusterRoleBinding that binds a ClusterRole to the user kube-apiserver-kubelet-client:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubelet-api-admin
subjects:
- kind: User
  name: kube-apiserver-kubelet-client
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:kubelet-api-admin
  apiGroup: rbac.authorization.k8s.io
system:kubelet-api-admin is a built-in ClusterRole that usually has the necessary permissions, but you can replace it with a more appropriate role.
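A rough way to apply and verify this, assuming the manifest above is saved as kubelet-api-admin.yaml:
kubectl apply -f kubelet-api-admin.yaml
kubectl auth can-i get nodes/proxy --as=kube-apiserver-kubelet-client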
Upvotes: 10