fledgling

Reputation: 1051

kubectl error You must be logged in to the server (Unauthorized) - EKS cluster

I am new to EKS and Kubernetes.

Here is what happened:

  1. An EKS cluster was created with a specific IAM role
  2. When trying to connect to the cluster with kubectl commands, it threw

error You must be logged in to the server (Unauthorized)

I followed the steps detailed here

https://aws.amazon.com/premiumsupport/knowledge-center/amazon-eks-cluster-access/

  1. Assumed the role that created the EKS cluster

  2. Exported the temporary credentials to a new profile dev in the AWS credentials file

  3. Ran AWS_PROFILE=dev kubectl get nodes. It listed all my nodes.

Note: I had already run aws eks --region <region> update-kubeconfig --name <cluster-name>. The full command sequence is sketched below.
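
For reference, the whole sequence looked roughly like this (account IDs, role names, and credential values are placeholders, not the real ones):

# Step 1: assume the role that created the cluster
aws sts assume-role \
  --role-arn arn:aws:iam::<account-id>:role/<creator-role> \
  --role-session-name eks-admin

# Step 2: put the returned temporary credentials into the dev profile
aws configure set aws_access_key_id <AccessKeyId> --profile dev
aws configure set aws_secret_access_key <SecretAccessKey> --profile dev
aws configure set aws_session_token <SessionToken> --profile dev

# Step 3: verify the identity and list the nodes
AWS_PROFILE=dev aws sts get-caller-identity
AWS_PROFILE=dev kubectl get nodes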

  4. I then tried to add the role/SAML user that needs to access the EKS cluster by applying the ConfigMap below, running AWS_PROFILE=dev kubectl apply -f aws-auth.yaml

aws-auth.yaml being:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:sts::******:assumed-role/aws_dev/[email protected]
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

Notice the role ARN is for a SAML user that has assumed the aws_dev role and is trying to connect to the cluster.

Once this was applied, the response was configmap/aws-auth configured

I then tried to execute kubectl get nodes without AWS_PROFILE=dev, and it failed again with error You must be logged in to the server (Unauthorized).

I also executed AWS_PROFILE=dev kubectl get nodes, which previously worked, but it fails now.

I am guessing the aws-auth information got messed up. Is there a way to revert the kubectl apply that was done above?

Any kubectl command fails now. What might be happening, and how can I rectify this?

Upvotes: 2

Views: 5746

Answers (2)

char

Reputation: 2147

Recreate the cluster, and when you get to step 6 in the linked article, add a second role (or user) to your aws-auth.yaml, like this:

  1. Get the current ConfigMap with kubectl get cm -n kube-system aws-auth -o yaml
  2. Add your role as a second item to the ConfigMap (don't change the first one):
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:sts::******:assumed-role/aws_dev/[email protected]
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    ### Add only this (assuming you're using a role)
    - rolearn: <ARN of your IAM role>
      username: <any name>
      groups:
        - system:masters
  3. Run AWS_PROFILE=dev kubectl apply -f aws-auth.yaml
  4. Then get the kubeconfig with your temporary IAM role credentials with aws eks --region <region> update-kubeconfig --name <cluster-name> (a quick verification sketch follows these steps)
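
Once the kubeconfig points at your role, a quick sanity check (a sketch; it assumes the role was mapped to system:masters as above):

kubectl get nodes
# should print "yes" if the system:masters mapping took effect
kubectl auth can-i '*' '*'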

You probably changed the aws-auth config. Generally, when you create a cluster, the user (or role) who created it has admin rights; when you switch users, you need to add them to the config (done as the user who created the cluster).
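
If the identity you are adding is a plain IAM user rather than a role, the equivalent entry goes under mapUsers instead of mapRoles (a sketch; the ARN and username are placeholders):

mapUsers: |
  - userarn: arn:aws:iam::<account-id>:user/<user-name>
    username: <any name>
    groups:
      - system:masters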

Upvotes: 1

Arghya Sadhu

Reputation: 44687

You get an authorization error when your AWS Identity and Access Management (IAM) entity isn't authorized by the role-based access control (RBAC) configuration of the Amazon EKS cluster. This happens when the Amazon EKS cluster is created by an IAM user or role that's different from the one used by aws-iam-authenticator.
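
A quick way to spot the mismatch is to compare the identity kubectl presents with the one that created the cluster (a sketch, assuming a kubeconfig generated by aws eks update-kubeconfig):

# The IAM identity your current credentials resolve to --
# this is what aws-iam-authenticator presents to the cluster
aws sts get-caller-identity

# Inspect which profile/command the kubeconfig user entry actually invokes
kubectl config view --minify

If the ARN from get-caller-identity is neither the cluster creator nor mapped in the aws-auth ConfigMap, every request comes back Unauthorized.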

Check the resolution here.

kubectl error You must be logged in to the server (Unauthorized) when accessing EKS cluster

Upvotes: 2
