Reputation: 1336
I'm experiencing strange behavior from newly created Kubernetes service accounts. It appears that their tokens grant unrestricted access to our cluster.
If I create a new namespace, a new service account inside that namespace, and then use the service account's token in a new kube config, I am able to perform all actions in the cluster.
# SERVER is the only variable you'll need to change to replicate on your own cluster
SERVER=https://k8s-api.example.com
NAMESPACE=test-namespace
SERVICE_ACCOUNT=test-sa
# Create a new namespace and service account
kubectl create namespace "${NAMESPACE}"
kubectl create serviceaccount -n "${NAMESPACE}" "${SERVICE_ACCOUNT}"
SECRET_NAME=$(kubectl get serviceaccount -n "${NAMESPACE}" "${SERVICE_ACCOUNT}" -o jsonpath='{.secrets[*].name}')
CA=$(kubectl get secret -n "${NAMESPACE}" "${SECRET_NAME}" -o jsonpath='{.data.ca\.crt}')
TOKEN=$(kubectl get secret -n "${NAMESPACE}" "${SECRET_NAME}" -o jsonpath='{.data.token}' | base64 --decode)
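# NOTE (assumption about newer clusters): on Kubernetes v1.24+ a token Secret is
# no longer auto-created for new service accounts, so SECRET_NAME above would be
# empty. There you could mint a short-lived token instead:
# TOKEN=$(kubectl create token -n "${NAMESPACE}" "${SERVICE_ACCOUNT}")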
# Create the config file using the certificate authority and token from the newly created
# service account
echo "
apiVersion: v1
kind: Config
clusters:
- name: test-cluster
cluster:
certificate-authority-data: ${CA}
server: ${SERVER}
contexts:
- name: test-context
context:
cluster: test-cluster
namespace: ${NAMESPACE}
user: ${SERVICE_ACCOUNT}
current-context: test-context
users:
- name: ${SERVICE_ACCOUNT}
user:
token: ${TOKEN}
" > config
Running that ^ as a shell script yields a config file in the current directory. The problem is, using that file, I'm able to read and edit all resources in the cluster. I'd like the newly created service account to have no permissions unless I explicitly grant them via RBAC.
# All pods are shown, including kube-system pods
KUBECONFIG=./config kubectl get pods --all-namespaces
# And I can edit any of them
KUBECONFIG=./config kubectl edit pods -n kube-system some-pod
I haven't added any role bindings to the newly created service account, so I would expect it to receive access denied responses for all kubectl queries using the newly generated config.
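For reference, here's a quick way to ask the API server what this service account should be able to do, via impersonation from my admin context (assuming the names used above):
# Should print "no" for an unprivileged service account, but prints "yes"
kubectl auth can-i get pods --all-namespaces --as=system:serviceaccount:test-namespace:test-sa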
Below is the decoded payload of the test-sa service account's JWT that's embedded in config:
{
  "iss": "kubernetes/serviceaccount",
  "kubernetes.io/serviceaccount/namespace": "test-namespace",
  "kubernetes.io/serviceaccount/secret.name": "test-sa-token-fpfb4",
  "kubernetes.io/serviceaccount/service-account.name": "test-sa",
  "kubernetes.io/serviceaccount/service-account.uid": "7d2ecd36-b709-4299-9ec9-b3a0d754c770",
  "sub": "system:serviceaccount:test-namespace:test-sa"
}
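For anyone wanting to reproduce the decode: the payload is the second dot-separated segment of the token (a rough sketch; base64 may complain about missing padding on some platforms):
# Decode the JWT payload from the TOKEN variable extracted earlier
echo "${TOKEN}" | cut -d '.' -f 2 | base64 --decode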
Things to consider...
- RBAC appears to be enabled: I see both rbac.authorization.k8s.io/v1 and rbac.authorization.k8s.io/v1beta1 in the output of kubectl api-versions | grep rbac, as suggested in this post. It is notable, though, that kubectl cluster-info dump | grep authorization-mode, as suggested in another answer to the same question, doesn't show output. Could this suggest RBAC isn't actually enabled?
- The kube config I used to run the commands above has cluster-admin role privileges, but I would not expect those to carry over to service accounts created with it.
Am I correct in my assumption that newly created service accounts should have extremely limited cluster access, and that the above scenario shouldn't be possible without permissive role bindings being attached to the new service account? Any thoughts on what's going on here, or ways I can restrict the access of test-sa?
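For context, this is the kind of explicit grant I'd expect to be required before test-sa could read pods, and even then only within its own namespace (a minimal sketch using the variables from the script above):
# Allow test-sa to read pods in test-namespace, and nothing else
kubectl create role pod-reader -n "${NAMESPACE}" --verb=get,list,watch --resource=pods
kubectl create rolebinding test-sa-pod-reader -n "${NAMESPACE}" --role=pod-reader --serviceaccount="${NAMESPACE}:${SERVICE_ACCOUNT}"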
Upvotes: 3
Views: 3247
Reputation: 1336
It turns out an overly permissive cluster-admin ClusterRoleBinding was bound to the system:serviceaccounts group. This resulted in all service accounts in our cluster having cluster-admin privileges.
It seems like somewhere early on in the cluster's life the following ClusterRoleBinding was created:
kubectl create clusterrolebinding serviceaccounts-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccounts
WARNING: Never apply this rule to your cluster ☝️
We have since removed this overly permissive rule and rightsized all service account permissions.
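For anyone auditing their own cluster for the same problem, something like this will surface ClusterRoleBindings that hand out cluster-admin and remove the offending one (using the binding name created above):
# List bindings granting cluster-admin, along with the subjects they apply to
kubectl get clusterrolebindings -o custom-columns='NAME:.metadata.name,ROLE:.roleRef.name,SUBJECTS:.subjects[*].name' | grep cluster-admin
# Delete the dangerous binding
kubectl delete clusterrolebinding serviceaccounts-cluster-admin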
Thank you to the folks who provided useful answers and comments on this question; they were helpful in tracking down the issue. This was a very dangerous RBAC configuration, and we are pleased to have it resolved.
Upvotes: 2
Reputation: 6577
I could not reproduce your issue on three different K8S versions in my test lab (including v1.15.3, v1.14.10-gke.17, v1.11.7-gke.12 - with basic auth enabled).
Unfortunately, token-based login activity is not recorded in the audit logs of the Cloud Logging console for GKE clusters :(.
To my knowledge, only data-access operations that go through Google Cloud are recorded (IAM-based, i.e. kubectl using the google auth provider).
If your "test-sa" service account is somehow being permitted to perform specific operations by RBAC, I would still try studying the audit logs of your GKE cluster. Maybe your service account is somehow being mapped to a Google service account, and is authorized that way.
You can always contact the official GCP support channel to troubleshoot your unusual case further.
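If you do go the audit-log route, a starting point might be the following (a sketch; it assumes gcloud is authenticated against the cluster's project and relies on the standard Cloud Logging resource type for GKE audit entries):
# Pull recent GKE audit log entries for manual inspection
gcloud logging read 'resource.type="k8s_cluster" AND logName:"cloudaudit.googleapis.com"' --limit=20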
Upvotes: 2
Reputation: 44569
You can check the permissions of the service account by running the command
kubectl auth can-i --list --as=system:serviceaccount:test-namespace:test-sa
If you see the output below, that's the very limited set of permissions a service account gets by default.
Resources                                       Non-Resource URLs   Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                  []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []               [create]
                                                [/api/*]            []               [get]
                                                [/api]              []               [get]
                                                [/apis/*]           []               [get]
                                                [/apis]             []               [get]
                                                [/healthz]          []               [get]
                                                [/healthz]          []               [get]
                                                [/livez]            []               [get]
                                                [/livez]            []               [get]
                                                [/openapi/*]        []               [get]
                                                [/openapi]          []               [get]
                                                [/readyz]           []               [get]
                                                [/readyz]           []               [get]
                                                [/version/]         []               [get]
                                                [/version/]         []               [get]
                                                [/version]          []               [get]
                                                [/version]          []               [get]
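If you instead see every resource listed with [*] verbs, something is granting the account broad rights (as turned out to be the case here). You can also test a single action directly:
# Should print "no" unless an explicit binding allows it
kubectl auth can-i delete pods -n kube-system --as=system:serviceaccount:test-namespace:test-sa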
Upvotes: 3