Reputation: 1630
I have an AKS cluster configured with enableAzureRBAC=true.
I am trying to install the ingress-nginx Helm chart through Flux.
It throws the following error:
reconciliation failed: failed to get last release revision: query: failed to query with labels: secrets is forbidden: User "system:serviceaccount:nginx:flux-applier" cannot list resource "secrets" in API group "" in the namespace "default": Azure does not have opinion for this user.
I can see that Flux sets up a ClusterRoleBinding to make the flux-applier a cluster admin, which I have verified is in place:
Name:         flux-applier-binding
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  cluster-admin
Subjects:
  Kind            Name          Namespace
  ----            ----          ---------
  ServiceAccount  flux-applier  flux-system
So I assume my issue is that Azure doesn't recognize this ServiceAccount and it isn't falling back to built-in roles?
https://github.com/kubeguard/guard/blob/master/authz/providers/azure/rbac/checkaccessreqhelper.go
The Azure docs on Azure RBAC for AKS clearly state:
If the identity making the request exists in Azure AD, Azure will team with Kubernetes RBAC to authorize the request. If the identity exists outside of Azure AD (i.e., a Kubernetes service account), authorization will defer to the normal Kubernetes RBAC.
https://learn.microsoft.com/en-us/azure/aks/concepts-identity
But this doesn't seem to be true? Or maybe Flux is doing something strange with ServiceAccounts? I say this because there is no flux-applier service account in the default namespace, only in the flux-system namespace. Yet if I assign cluster-admin to that "ghost" service account through kubectl, things start working:
kubectl create clusterrolebinding flux-nginx-cluster-admin --clusterrole=cluster-admin --serviceaccount=nginx:flux-applier
I'd like to avoid having to do this, though; it doesn't seem like something that should be my responsibility.
Upvotes: 1
Views: 1126
Reputation: 9604
The other answer on this question walks you through what you already found: you have to set up a ClusterRoleBinding to give the Flux applier the admin permissions it needs to set up your infrastructure in that namespace.
It also seems better to explicitly grant permissions to new namespaces as they appear than to have every service account named flux-applier in every possible future namespace automatically receive cluster-admin "god" permissions.
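If you want to keep the grant scoped rather than cluster-wide, a namespace-scoped RoleBinding along these lines would do it. This is only a sketch: the nginx namespace and flux-applier service account come from the question, and the binding name is made up.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  # illustrative name; binds the built-in admin ClusterRole only inside nginx
  name: flux-applier-nginx-admin
  namespace: nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
  - kind: ServiceAccount
    name: flux-applier
    namespace: nginx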
Upvotes: 0
Reputation: 753
I tried to reproduce the same issue in my environment and got the below results.
I have created an AKS cluster configured with RBAC.
Use this link for reference files
Created a ClusterRole to grant read access to secrets in any particular namespace:
vi clusterrole.yaml
kubectl apply -f clusterrole.yaml
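The contents of clusterrole.yaml aren't shown above; a minimal sketch based on the standard Kubernetes RBAC example (the secret-reader name is illustrative) could look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader
rules:
  # read-only access to secrets; a ClusterRole is not namespaced,
  # so it can later be bound per namespace or cluster-wide
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "watch", "list"]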
I have created a RoleBinding to grant the pod-reader role within the namespace and deployed the file:
vi rolebinding.yaml
kubectl apply -f rolebinding.yaml
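A rolebinding.yaml for that step might look like the following sketch, assuming a Role named pod-reader already exists in the namespace and using an illustrative user name:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default   # assumption: the namespace you are granting access in
subjects:
  - kind: User
    name: jane         # illustrative user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader     # assumption: this Role exists in the same namespace
  apiGroup: rbac.authorization.k8s.io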
Created a ClusterRoleBinding to grant the permissions across the whole cluster:
vi clusterrolebinding.yaml
kubectl apply -f clusterrolebinding.yaml
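A clusterrolebinding.yaml for that step might be sketched as below, reusing the secret-reader ClusterRole from above and an illustrative group name:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-secrets-global
subjects:
  - kind: Group
    name: manager      # illustrative group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io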
Created a RoleBinding to grant the permissions in the admin ClusterRole to a user:
kubectl create rolebinding bob-admin-binding --clusterrole=admin --user=bob --namespace=namespace_name
To grant a ClusterRole to a service account within a namespace, use the below command:
kubectl create rolebinding myapp-view-binding --clusterrole=view --serviceaccount=acme:myapp --namespace=namespace_name
To grant a ClusterRole across the entire cluster, use the below command:
kubectl create clusterrolebinding root-cluster-admin-binding --clusterrole=cluster-admin --user=root
I have created a namespace called nginx-ingress-controller using the below command:
kubectl create namespace nginx-ingress-controller
I have defined the Helm repository using Flux:
vi helmrepository-bitnami.yaml
kubectl apply -f helmrepository-bitnami.yaml
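A helmrepository-bitnami.yaml for Flux typically looks like this sketch (the refresh interval is an assumption):

apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: bitnami
  namespace: flux-system
spec:
  # how often Flux refreshes the repository index
  interval: 30m
  url: https://charts.bitnami.com/bitnami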
I have created a Kustomization in my Flux repo to deploy the file:
vi kustomization-nginx-ingress-controller.yaml
kubectl apply -f kustomization-nginx-ingress-controller.yaml
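A kustomization-nginx-ingress-controller.yaml could be sketched as below; the path and the flux-system GitRepository are assumptions about how the repo is laid out:

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: nginx-ingress-controller
  namespace: flux-system
spec:
  interval: 10m
  # assumption: directory in the Git repo that holds the controller manifests
  path: ./nginx-ingress-controller
  prune: true
  targetNamespace: nginx-ingress-controller
  sourceRef:
    kind: GitRepository
    name: flux-system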
I have created the ConfigMap file:
vi configmap.yaml
kubectl apply -f configmap.yaml
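The configmap.yaml contents aren't shown; as a purely illustrative sketch, a controller ConfigMap might carry settings such as:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller      # illustrative name
  namespace: nginx-ingress-controller
data:
  use-proxy-protocol: "false"         # example ingress-nginx setting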
Up to this point, I have added the Helm repo for the nginx controller into the cluster through Flux.
I have deployed the nginx ingress controller
kubectl get pods
kubectl get services
Here I am able to see all the pods and services that I have created.
After granting the permissions on the cluster, we have to create or update the RBAC rules with kubectl auth reconcile. Using the below commands, it will create or update the missing rules and, optionally, remove the extra permissions:
kubectl auth reconcile -f my-rbac-rules.yaml --dry-run=client
kubectl auth reconcile -f my-rbac-rules.yaml
kubectl auth reconcile -f my-rbac-rules.yaml --remove-extra-subjects --remove-extra-permissions
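As an example of what my-rbac-rules.yaml could contain here, a sketch of the binding from the question (the name matches the one created with kubectl earlier) would be:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: flux-nginx-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: flux-applier
    namespace: nginx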
I have granted the particular roles to the particular service accounts as needed. We have used the below command to grant the view permission to the default service account:
kubectl create rolebinding default-view \
--clusterrole=view \
--serviceaccount=my-namespace:default \
--namespace=namespace_name
Note: If Azure doesn't recognize the service account, we have to grant the permissions to the default service account ourselves.
For more information, refer to this URL.
Upvotes: 0