Reputation: 321
I'm running into the following error:

Error from server (Forbidden): namespaces is forbidden: User "arn:aws:iam::xxxxx:user/xx" cannot list resource "namespaces" in API group "" at the cluster scope

when running:

kubectl get ns

I have already tried the steps from this article:

Error from server (Forbidden): namespaces is forbidden - AWS EKS
https://repost.aws/knowledge-center/eks-kubernetes-object-access-error

The IAM user I'm using to access the EKS cluster has the EKS-admin permission set, which grants "Full Access" for all resources.
My Terraform configuration maps the ARNs into the cluster (example below):
manage_aws_auth_configmap = true

aws_auth_users = [
  {
    userarn  = "arn:aws:iam::xxxxx:root"
    username = "root"
    groups   = ["system:masters"]
  },
  {
    userarn  = "arn:aws:iam::xxxxx:user"
    username = "xx"
    groups   = ["system:masters"]
  },
]

aws_auth_accounts = [
  "xxxxx"
]
I also verified that the cluster has the aws-auth ConfigMap by inspecting it from AWS CloudShell, which returns:
apiVersion: v1
data:
  mapAccounts: |
    - "xxxxx"
  mapRoles: |
    - "groups":
      - "system:bootstrappers"
      - "system:nodes"
      "rolearn": "arn:aws:iam::xxxxx:role/backend-eks-node-group-xxxxxxxxxx"
      "username": "system:node:{{EC2PrivateDNSName}}"
  mapUsers: |
    - "groups":
      - "system:masters"
      "userarn": "arn:aws:iam::xxxxx:root"
      "username": "root"
    - "groups":
      - "system:masters"
      "userarn": "arn:aws:iam::xxxxx:user"
      "username": "xx"
kind: ConfigMap
metadata:
  creationTimestamp: "2023-11-27T13:33:15Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "26180451"
  uid: 13b72acd-5531-4fde-9a2c-43df456704e1
I also tried creating a new IAM user, giving it full access, and connecting to the EKS cluster with it.
Upvotes: 2
Views: 2053
Reputation: 18203
The answer to this question is actually two-fold. The first important thing to understand is that the permissions assigned to an IAM user/role are AWS permissions; they have nothing to do with Kubernetes permissions. The former allow a user/role to interact with the AWS managed service (e.g., get the cluster information) through the AWS console, APIs, or AWS CLI. The latter, i.e., Kubernetes permissions, allow you to interact with the cluster itself when using kubectl. The aws-auth ConfigMap provides the connection between an IAM user/role and a Kubernetes user/group. For that mapping to actually grant access, you also have to create a Role/ClusterRole and bind it to that Kubernetes user/group. How you do that depends on your use case, but it can also be achieved with Terraform:
resource "kubernetes_cluster_role_v1" "some_role" {
  metadata {
    name = "some-role-name"
  }

  rule {
    api_groups = [""]
    resources  = ["namespaces"]
    verbs      = ["get", "list"]
  }
}

resource "kubernetes_cluster_role_binding_v1" "some_role_binding" {
  metadata {
    name = "some-role-name-binding"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role_v1.some_role.metadata[0].name
  }

  subject {
    api_group = "rbac.authorization.k8s.io"
    kind      = "User"
    name      = "xx" # or whatever the user name is
  }
}
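For reference, the raw Kubernetes manifest equivalent of the Terraform resources above (a sketch; the names `some-role-name` and `xx` simply mirror the example) could be applied directly with `kubectl apply -f`:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: some-role-name
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: some-role-name-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: some-role-name
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: xx   # must match the username from the aws-auth mapping
```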
This way, the ClusterRole gets created and bound to the Kubernetes user "xx" — the username your IAM user is mapped to in the EKS configuration:
aws_auth_users = [
  {
    userarn  = "arn:aws:iam::xxxxx:root"
    username = "root"
    groups   = ["system:masters"]
  },
  {
    userarn  = "arn:aws:iam::xxxxx:user"
    username = "xx"
    groups   = ["system:masters"]
  },
]
If you need to limit access to a particular namespace, you can use a RoleBinding instead of a ClusterRoleBinding. Alternatively, you could create a (namespaced) Role and bind it with a RoleBinding if you need tighter scoping. Also, the rules in this example only allow getting and listing namespaces, so additional rules can be added as needed.
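A namespace-scoped variant could look like the following sketch — the namespace `dev`, the resource names, and the verbs are placeholders, not prescriptions:

```hcl
# Sketch: namespace-scoped access instead of cluster-wide.
# "dev", the resource names, and the rule contents are hypothetical.
resource "kubernetes_role_v1" "ns_reader" {
  metadata {
    name      = "ns-reader"
    namespace = "dev"
  }

  rule {
    api_groups = [""]
    resources  = ["pods", "services"]
    verbs      = ["get", "list", "watch"]
  }
}

resource "kubernetes_role_binding_v1" "ns_reader_binding" {
  metadata {
    name      = "ns-reader-binding"
    namespace = "dev"   # the binding only grants access within this namespace
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Role"
    name      = kubernetes_role_v1.ns_reader.metadata[0].name
  }

  subject {
    api_group = "rbac.authorization.k8s.io"
    kind      = "User"
    name      = "xx"
  }
}
```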
As a final step, you need to update your kubeconfig file with the aws eks update-kubeconfig <options> command.
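A concrete invocation might look like the following — the cluster name and region are placeholders, so substitute your own values:

```
# Hypothetical cluster name and region -- substitute your own values.
aws eks update-kubeconfig --name my-cluster --region eu-west-1

# Then verify the mapping; this should succeed once the
# ClusterRole/ClusterRoleBinding above are in place.
kubectl auth can-i list namespaces
```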
NOTE: The above applies when the old way of EKS authentication (the aws-auth ConfigMap) is used. The new way, using the EKS API (access entries) to authenticate, solves some of the issues mentioned previously, first and foremost the mapping between IAM users/roles, policies, and Kubernetes permissions.
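With access entries, admin access can be granted without editing the aws-auth ConfigMap at all. A sketch, assuming the cluster name and ARNs are placeholders and that the cluster's authentication mode allows the EKS API:

```
# Placeholders throughout; the cluster must allow API authentication mode.
aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::xxxxx:user/xx

aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::xxxxx:user/xx \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```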
Upvotes: 4