Reputation: 1051
I am new to EKS and Kubernetes. Here is what happened: I ran into the error
error: You must be logged in to the server (Unauthorized)
I followed the steps detailed here:
https://aws.amazon.com/premiumsupport/knowledge-center/amazon-eks-cluster-access/
- Assumed the role that created the EKS cluster
- Exported the temporary credentials to a new profile named dev in my AWS credentials file
- Ran AWS_PROFILE=dev kubectl get nodes, which was able to list all my nodes
Note: I had already run aws eks --region <region> update-kubeconfig --name <cluster-name>
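For reference, the assume-and-export step looked roughly like this (the account ID, role ARN and session name below are placeholders, not my real values):
# Assume the role that created the cluster (role ARN is a placeholder)
aws sts assume-role \
  --role-arn arn:aws:iam::111111111111:role/aws_dev \
  --role-session-name eks-access
# Copy AccessKeyId / SecretAccessKey / SessionToken from the output into the dev profile
aws configure set aws_access_key_id <AccessKeyId> --profile dev
aws configure set aws_secret_access_key <SecretAccessKey> --profile dev
aws configure set aws_session_token <SessionToken> --profile dev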
Next I ran AWS_PROFILE=dev kubectl apply -f aws-auth.yaml, with aws-auth.yaml being:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:sts::******:assumed-role/aws_dev/[email protected]
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
Notice that the role ARN is the STS assumed-role identity (arn:aws:sts::<account>:assumed-role/<role-name>/<session-name>) of a SAML user assumed into the aws_dev role, which is the role that tries to connect to the cluster.
Once this was applied, the response was configmap/aws-auth configured.
I then tried to execute kubectl get nodes without AWS_PROFILE=dev, and it failed again with error: You must be logged in to the server (Unauthorized).
I also executed AWS_PROFILE=dev kubectl get nodes, which previously worked but now fails as well.
I am guessing the aws-auth information got messed up. Is there a way to revert the kubectl apply that was done above? Every kubectl command fails now. What might be happening, and how can I rectify this?
Upvotes: 2
Views: 5746
Reputation: 2147
Recreate the cluster and, when you get to step 6 in the link, add a second role (or user) to your aws-auth.yaml. You can inspect the current ConfigMap with:
kubectl get cm -n kube-system aws-auth -o yaml
Your aws-auth.yaml should then look like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:sts::******:assumed-role/aws_dev/[email protected]
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    ### Add only this (assuming you're using a role)
    - rolearn: <ARN of your IAM role>
      username: <any name>
      groups:
        - system:masters
Then apply it and update your kubeconfig:
AWS_PROFILE=dev kubectl apply -f aws-auth.yaml
aws eks --region <region> update-kubeconfig --name <cluster-name>
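If you're not sure which ARN to put in the new mapRoles entry, something like this should show it (a sketch assuming the role is named aws_dev; adjust to your role name):
# Returns the plain IAM role ARN, e.g. arn:aws:iam::<account-id>:role/aws_dev
aws iam get-role --role-name aws_dev --query 'Role.Arn' --output text
Note that this is the IAM role ARN (arn:aws:iam::...:role/...), not the STS assumed-role ARN (arn:aws:sts::...:assumed-role/...) that appears in the question's ConfigMap.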
You probably changed the aws-auth config. Generally, when you create a cluster, the user (or role) who created it has admin rights; when you switch users, you need to add them to the config, and that change has to be made as the user who created the cluster.
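As a rough sketch (assuming the dev profile still resolves to the identity that created the cluster), that change could be inspected and made like this:
# Run these as the identity that created the cluster
AWS_PROFILE=dev kubectl get configmap aws-auth -n kube-system -o yaml
AWS_PROFILE=dev kubectl edit configmap aws-auth -n kube-system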
Upvotes: 1
Reputation: 44687
You get an authorization error when your AWS Identity and Access Management (IAM) entity isn't authorized by the role-based access control (RBAC) configuration of the Amazon EKS cluster. This typically happens when the Amazon EKS cluster was created by an IAM user or role that's different from the one aws-iam-authenticator is using for your kubectl calls.
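A quick way to check which IAM identity your kubectl calls actually present is to compare the caller identity for each credential source (a sketch; the dev profile name comes from the question):
# Identity used by the default credential chain
aws sts get-caller-identity
# Identity used by the dev profile
AWS_PROFILE=dev aws sts get-caller-identity
# Show how the current kubeconfig context obtains its token (exec/env settings)
kubectl config view --minify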
Check the resolution here: kubectl error You must be logged in to the server (Unauthorized) when accessing EKS cluster
Upvotes: 2