Reputation: 727
I created an EKS cluster and am trying to connect to it from my local CLI. For that, I installed the AWS CLI and provided the correct 'aws configure' credentials. The user I am using to connect to AWS has the EKS-related policies attached. Still, I am getting the following error ...
E0209 21:09:44.893284 2465691 memcache.go:238] couldn't get current server API group list: the server has asked for the client to provide credentials
E0209 21:09:45.571635 2465691 memcache.go:238] couldn't get current server API group list: the server has asked for the client to provide credentials
E0209 21:09:46.380542 2465691 memcache.go:238] couldn't get current server API group list: the server has asked for the client to provide credentials
E0209 21:09:47.105407 2465691 memcache.go:238] couldn't get current server API group list: the server has asked for the client to provide credentials
E0209 21:09:47.869614 2465691 memcache.go:238] couldn't get current server API group list: the server has asked for the client to provide credentials
error: You must be logged in to the server (the server has asked for the client to provide credentials)
Upvotes: 46
Views: 150294
Reputation: 3548
My problem was solved by removing the aws_session_token from the credentials file:
[default]
aws_access_key_id = ...
aws_secret_access_key = ...
# aws_session_token = ...
Upvotes: 0
Reputation: 1611
Well, in my case, the AWS keys with which I created the cluster and with which I configured kubectl were different; they belonged to two different AWS identities.
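A quick way to confirm which identity the CLI (and therefore kubectl's EKS token call) is actually using is to ask STS. A minimal sketch, where the profile name is only a placeholder:
aws sts get-caller-identity
# compare with the identity that created the cluster, e.g. under a named profile
aws sts get-caller-identity --profile <profile_used_to_create_cluster>
If the two ARNs differ, either switch credentials or grant the second identity access to the cluster via an access entry (see the answers below).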
Upvotes: 15
Reputation: 1928
If you are getting this error when accessing your k3s cluster, you need to update your ~/.kube/config with the latest config from the cluster.
To copy your kubeconfig locally so that you can access your Kubernetes cluster, run:
scp debian@master_ip:/etc/rancher/k3s/k3s.yaml ~/.kube/config
If you get a file Permission denied error, go into the node and temporarily run:
sudo chmod 777 /etc/rancher/k3s/k3s.yaml
Then copy with the scp command and reset the permissions back to:
sudo chmod 600 /etc/rancher/k3s/k3s.yaml
You'll then want to modify the config to point to the master IP by running:
sudo nano ~/.kube/config
Then change server: https://127.0.0.1:6443
to match your master IP: server: https://<master_ip>:6443
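If you prefer to script that last edit instead of opening nano, a rough sketch (assuming GNU sed, with <master_ip> as a placeholder for your actual master address):
sed -i 's|https://127.0.0.1:6443|https://<master_ip>:6443|' ~/.kube/config
kubectl get nodes   # quick check that the new endpoint and credentials work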
Upvotes: 0
Reputation: 782
Following Irshad Nizami's answer, I found out that my user didn't have any permissions on the cluster at all. I was following the default EKS setup from the Terraform docs.
What I was missing was the following entries:
data "aws_iam_user" "jan_tyminski" {
user_name = "tymik.me"
}
resource "aws_eks_access_entry" "jan_tyminski" {
cluster_name = aws_eks_cluster.my_cluster.name
principal_arn = data.aws_iam_user.jan_tyminski.arn
type = "STANDARD"
}
resource "aws_eks_access_policy_association" "jan_tyminski_AmazonEKSAdminPolicy" {
cluster_name = aws_eks_cluster.my_cluster.name
policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy"
principal_arn = aws_eks_access_entry.jan_tyminski.principal_arn
access_scope {
type = "cluster"
}
}
resource "aws_eks_access_policy_association" "jan_tyminski_AmazonEKSAdminPolicy" {
cluster_name = aws_eks_cluster.my_cluster.name
policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy"
principal_arn = aws_eks_access_entry.jan_tyminski.principal_arn
access_scope {
type = "cluster"
}
}
resource "aws_eks_access_policy_association" "jan_tyminski_AmazonEKSClusterAdminPolicy" {
cluster_name = aws_eks_cluster.my_cluster.name
policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
principal_arn = aws_eks_access_entry.jan_tyminski.principal_arn
access_scope {
type = "cluster"
}
}
Of course, my user - tymik.me - is just an example; amend the aws_iam_user data source to use the user you are using to access the cluster.
It doesn't necessarily have to be the same user as the one that created the cluster, as long as the user gets the permissions.
I believe this answer sheds some new light on the issue: if you create the cluster some way other than clicking through the AWS Console, the answers telling you to use the same user as the one that created the cluster may not really be helpful - as was the case for me.
Note: I used AmazonEKSAdminPolicy and AmazonEKSClusterAdminPolicy because I am working on a PoC and cluster permissions are not something I want to worry about in this scenario - keep in mind that you should use the appropriate policy for your scenario.
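After applying the Terraform above, a quick way to confirm the access entry works is to refresh the kubeconfig and issue a read-only call; the cluster name and region are placeholders here:
aws eks update-kubeconfig --name <EKS_CLUSTER_NAME> --region <region>
kubectl get nodes   # should return the nodes instead of asking for credentials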
Upvotes: 2
Reputation: 21
What fixed the issue for me is this:
In the AWS console, go to EKS and create an access entry (of the type Standard) for the user -> first add AmazonEKSAdminViewPolicy and then test that you are able to run basic view commands such as kubectl get svc. Then come back to the console and, in the same access entry, edit it to also add AmazonEKSClusterAdminPolicy. Sometimes the creation of access policies is hindered by errors if you try to add multiple policies at the same time, so go sequentially.
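The same sequence can also be done from the AWS CLI with the EKS access-entry commands; this is only a sketch, with the cluster name, account id and user name as placeholders:
# create a Standard access entry for your IAM user
aws eks create-access-entry \
  --cluster-name <cluster_name> \
  --principal-arn arn:aws:iam::<account_id>:user/<user_name> \
  --type STANDARD
# attach the view policy first, test kubectl get svc, then attach AmazonEKSClusterAdminPolicy the same way
aws eks associate-access-policy \
  --cluster-name <cluster_name> \
  --principal-arn arn:aws:iam::<account_id>:user/<user_name> \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminViewPolicy \
  --access-scope type=cluster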
Upvotes: 2
Reputation: 41
In my case, I created the cluster using the root account, but my AWS CLI was configured with some other account that was not root. You need to have the same account configured in the AWS CLI as the one that was used to create the cluster.
I didn't want to add the root user to the AWS CLI, so in my case I solved the error by adding the IAM user from my AWS CLI to the EKS > Clusters > YOUR_CLUSTER > Access > IAM access entries section.
Upvotes: 4
Reputation: 188
I managed to resolve the same problem by granting public API server endpoint access (note: be careful about doing this in a production environment).
If you are using the AWS console: go to the cluster's Networking tab and select Manage endpoint access.
If you are using Terraform:
Set the Terraform module input cluster_endpoint_public_access to true.
As explained in the official AWS documentation, kubectl has to reach the EKS cluster from an allowed network: either from inside the VPC the cluster is located in, or through public access restricted to an allowed CIDR block.
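For reference, the endpoint can also be opened from the AWS CLI; a rough sketch, with the cluster name and CIDR as placeholders (restrict the CIDR to your own IP rather than leaving it open to 0.0.0.0/0):
aws eks update-cluster-config \
  --name <cluster_name> \
  --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="<your_ip>/32"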
Upvotes: 1
Reputation: 1504
I was working with a different AWS account than usual, so I set the environment variables:
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
I do not need them to access other AWS accounts from my computer - those credentials are managed differently - but I had to remove the variables to make the usual connection work again.
In similar scenarios you may need to check which AWS environment variables are set and remove the ones that override your intended credentials, as sketched below.
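A minimal shell sketch of that check and cleanup (these are the standard AWS CLI environment variables, nothing specific to this cluster):
env | grep '^AWS_'                 # see which AWS variables are currently set
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
aws sts get-caller-identity        # confirm which identity the CLI uses now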
Upvotes: 8
Reputation: 4147
In my case, I got a similar error because of a cgroup driver mismatch. The systemd cgroup driver is Kubernetes' default nowadays, but if you don't use Docker and instead run containerd + runc, containerd defaults to the cgroupfs cgroup driver. This mismatch does not show up as an explicit error, BUT it leads to the error above.
So basically I did not do THIS: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd
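A rough sketch of the fix from that link, assuming the containerd 1.x default config layout (check your containerd version and config path before running it):
# regenerate the default containerd config and switch runc to the systemd cgroup driver
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd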
Upvotes: 0
Reputation: 51
You are probably not set to the correct AWS account, i.e. the one where the relevant EKS cluster lives.
Use "aws configure list" to verify which profile you are connected to (it is probably not the right one).
Use "aws configure" to set the correct account, or use the relevant AWS environment variables instead.
Upvotes: 0
Reputation: 4984
My problem was solved by removing cli_auto_prompt from the AWS profile:
vi ~/.aws/config
[default]
region = us-west-2
# cli_auto_prompt = on
[profile <X>]
region = us-west-2
# cli_auto_prompt = on
Also, make sure to update the kubeconfig one more time after the above change. Be sure to use the correct cluster name and region, and make sure the logged-in user in your CLI has admin permissions in the EKS RBAC configuration.
aws eks update-kubeconfig --name <EKS_CLUSTER_NAME> --region us-west-2
Upvotes: 2
Reputation: 83
The same error happened to me on k3d. It seems the certificates had expired. I tried this and it worked:
k3d kubeconfig get <name_of_cluster>
k3d kubeconfig merge <name_of_cluster> -d -u
k3d cluster stop <name_of_cluster>
k3d cluster start <name_of_cluster>
Upvotes: -4