Kubectl command throwing error: Unable to connect to the server: getting credentials: exec: exit status 2

I am doing a lab setup of EKS/kubectl, and after completing the cluster build, I run the following:

> kubectl get node

And I get the following error:
Unable to connect to the server: getting credentials: exec: exit status 2

Moreover, I am fairly sure it is a configuration issue, because kubectl version produces the following output:

kubectl version
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help
aws: error: argument operation: Invalid choice, valid choices are:

create-cluster                           | delete-cluster                          
describe-cluster                         | describe-update                         
list-clusters                            | list-updates                            
update-cluster-config                    | update-cluster-version                  
update-kubeconfig                        | wait                                    
help                                    
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.1", GitCommit:"d224476cd0730baca2b6e357d144171ed74192d6", GitTreeState:"clean", BuildDate:"2020-01-14T21:04:32Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: getting credentials: exec: exit status 2

Please advise next steps for troubleshooting.

Upvotes: 17

Views: 101448

Answers (13)

Ashwin

Reputation: 53

Simply updating the kubeconfig file with the eks command worked for me.

aws eks --region ap-south-1 update-kubeconfig --name <cluster name> --profile <profile name>

Upvotes: 0

ruben210698

Reputation: 101

In Azure:

  1. Delete all contents of the ~/.kube/ folder

  2. Execute:

    sudo az aks install-cli

(https://learn.microsoft.com/en-us/answers/questions/1106601/aks-access-issue)

  3. Reconnect:

    az login

    az aks get-credentials -n {clustername} -g {resourcegroup}

It worked for me for Azure.

Upvotes: 0

Raymond

Reputation: 173

I had the same error and solved it by upgrading my awscli to the latest version.
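As a sketch, upgrading to AWS CLI v2 on macOS (the platform in the question) looks roughly like this; the package URL is the one AWS documents, but check the current install guide:

    # download and install the AWS CLI v2 package
    curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
    sudo installer -pkg AWSCLIV2.pkg -target /

    # confirm the new version is the one on PATH
    aws --version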

Upvotes: 0

elirandav

Reputation: 2063

In my case, as I am using Azure (not AWS), I had to install "kubelogin", which resolved the issue.

"kubelogin" is a client-go credential (exec) plugin implementing azure authentication. This plugin provides features that are not available in kubectl. It is supported on kubectl v1.11+

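If the Azure CLI is available, a minimal sketch of installing kubelogin and pointing the kubeconfig at it (assuming you authenticate through the Azure CLI) is:

    # installs kubectl and kubelogin
    sudo az aks install-cli

    # rewrite the kubeconfig entries to fetch tokens via kubelogin using the Azure CLI login
    kubelogin convert-kubeconfig -l azurecli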
Upvotes: 2

Theofilos Papapanagiotou

Reputation: 5599

In EKS you can retrieve your kubectl credentials using the following command:

% aws eks update-kubeconfig --name cluster_name
Updated context arn:aws:eks:eu-west-1:xxx:cluster/cluster_name in /Users/theofpa/.kube/config

You can retrieve your cluster name using:

% aws eks list-clusters
{
    "clusters": [
        "cluster_name"
    ]
}

Upvotes: 1

David Upegui

Reputation: 1

I had the same problem. The issue was that my .aws/credentials file contained multiple users, and the user that had permissions on the EKS cluster (admin_test) wasn't the default user. So in my case, I made the admin_test user the default for the CLI using an environment variable:

export AWS_PROFILE='admin_test'

After that, I checked which identity was active with the command:

aws sts get-caller-identity

Finally, I was able to get the nodes with the kubectl get nodes command.
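For context, a .aws/credentials file with more than one profile typically looks something like this (the profile names and key values below are placeholders):

    [default]
    aws_access_key_id = <default user's access key>
    aws_secret_access_key = <default user's secret key>

    [admin_test]
    aws_access_key_id = <admin_test access key>
    aws_secret_access_key = <admin_test secret key>

Setting AWS_PROFILE tells the AWS CLI, and therefore the exec credential plugin that kubectl invokes, which of these profiles to use.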

Reference: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html

Upvotes: 0

S Singh

Reputation: 33

Make sure you have installed AWS CLI.

https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
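A quick way to confirm it is installed and on your PATH:

    aws --version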

Upvotes: 2

VivekDev

Reputation: 25349

For me, running kubectl get nodes or kubectl cluster-info gave the following error:

Unable to connect to the server: getting credentials: exec: executable kubelogin not found

It looks like you are trying to use a client-go credential plugin that is not installed.

To learn more about this feature, consult the documentation available at:
      https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins

I did the following to resolve this.

  1. Deleted all of the contents inside ~/.kube/. In my case, it's a Windows machine, so it's C:\Users\nis\.kube. Here nis is the user name I logged in with.

  2. Ran the get credentials command as follows.

    az aks get-credentials --resource-group terraform-aks-dev --name terraform-aks-dev-aks-cluster --admin

Note the --admin at the end. Without it, I got the same error.

Now the two commands above work.

Reference: https://blog.baeke.info/2021/06/03/a-quick-look-at-azure-kubelogin/

Upvotes: 3

wojteck

Reputation: 69

You need to update/recreate your local kubeconfig. In my case, I deleted the whole ~/.kube/config and followed this tutorial:

https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
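That tutorial essentially boils down to regenerating the kubeconfig with update-kubeconfig; the region and cluster name below are placeholders:

    aws eks update-kubeconfig --region <region> --name <cluster-name>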

Upvotes: 0

Bradley Davis

Reputation: 17

Removing and recreating the ~/.aws/credentials file resolved this issue for me.

rm ~/.aws/credentials
touch ~/.aws/credentials
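Note that an empty credentials file cannot authenticate by itself; after recreating it you still need to put valid keys back, for example with:

    aws configure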

Upvotes: -9

Deepak Singhvi

Reputation: 787

Can you check your ~/.kube/config file?

For example, if you have started a local cluster using minikube and its config is present there, you should not be getting the server error.

Sample config file


    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority: /Users/singhvi/.minikube/ca.crt
        server: https://127.0.0.1:32772
      name: minikube
    contexts:
    - context:
        cluster: minikube
        user: minikube
      name: minikube
    current-context: minikube
    kind: Config
    preferences: {}
    users:
    - name: minikube
      user:
        client-certificate: /Users/singhvi/.minikube/profiles/minikube/client.crt
        client-key: /Users/singhvi/.minikube/profiles/minikube/client.key

Upvotes: 1

selftaught91

Reputation: 7461

Please delete the cache folder present in

~/.aws/cli/cache
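For example, on macOS/Linux:

    rm -rf ~/.aws/cli/cache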

Upvotes: 10

BMW

Reputation: 45223

Do you have the kubectl configuration file ready?

Normally we put it under ~/.kube/config, and the file includes the cluster endpoint, certificate, contexts, admin users, and so on.
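To see what kubectl is currently configured to talk to, you can inspect the active context, for example:

    kubectl config current-context
    kubectl config view --minify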

For further details, read this document: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html

Upvotes: 2
