Reputation: 949
How can I access Kubernetes worker node labels from a container/pod running in the cluster? Labels are set on the worker node, as the YAML output of this kubectl command launched against an Azure AKS worker node shows:
$ kubectl get nodes aks-agentpool-39829229-vmss000000 -o yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    node.alpha.kubernetes.io/ttl: "0"
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: "2021-10-15T16:09:20Z"
  labels:
    agentpool: agentpool
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/instance-type: Standard_DS2_v2
    beta.kubernetes.io/os: linux
    failure-domain.beta.kubernetes.io/region: eastus
    failure-domain.beta.kubernetes.io/zone: eastus-1
    kubernetes.azure.com/agentpool: agentpool
    kubernetes.azure.com/cluster: xxxx
    kubernetes.azure.com/mode: system
    kubernetes.azure.com/node-image-version: AKSUbuntu-1804gen2containerd-2021.10.02
    kubernetes.azure.com/os-sku: Ubuntu
    kubernetes.azure.com/role: agent
    kubernetes.azure.com/storageprofile: managed
    kubernetes.azure.com/storagetier: Premium_LRS
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: aks-agentpool-39829229-vmss000000
    kubernetes.io/os: linux
    kubernetes.io/role: agent
    node-role.kubernetes.io/agent: ""
    node.kubernetes.io/instance-type: Standard_DS2_v2
    storageprofile: managed
    storagetier: Premium_LRS
    topology.kubernetes.io/region: eastus
    topology.kubernetes.io/zone: eastus-1
  name: aks-agentpool-39829229-vmss000000
  resourceVersion: "233717"
  selfLink: /api/v1/nodes/aks-agentpool-39829229-vmss000000
  uid: 0241eb22-4d1b-4d65-870f-fcc51dac1c70
Note: the pod/container I have runs with non-root access and does not have a privileged user.
Is there a way to access these labels from the worker node itself?
Upvotes: 0
Views: 1179
Reputation: 1210
In the AKS cluster, create a namespace like:
kubectl create ns get-labels
Create a Service Account in the namespace like:
kubectl create sa get-labels -n get-labels
Create a ClusterRole like:
kubectl create clusterrole get-labels-clusterrole --resource=nodes --verb=get,list
Create a ClusterRoleBinding (ClusterRoleBindings are cluster-scoped, so no namespace flag is needed) like:
kubectl create clusterrolebinding get-labels-rolebinding --clusterrole get-labels-clusterrole --serviceaccount get-labels:get-labels
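If you prefer declarative manifests, a YAML equivalent of the four commands above might look like this (a sketch; the names match the commands):
apiVersion: v1
kind: Namespace
metadata:
  name: get-labels
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: get-labels
  namespace: get-labels
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: get-labels-clusterrole
rules:
# Allow reading node objects cluster-wide (nodes are in the core API group).
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: get-labels-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: get-labels-clusterrole
subjects:
- kind: ServiceAccount
  name: get-labels
  namespace: get-labels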
Run a pod in the namespace you created like:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: get-labels
  namespace: get-labels
spec:
  serviceAccountName: get-labels
  containers:
  - image: centos:7
    name: get-labels
    command:
    - /bin/bash
    - -c
    - tail -f /dev/null
EOF
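Check that the pod is up before continuing (using the pod and namespace names from the manifest above):
kubectl get pod get-labels -n get-labels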
Execute a shell in the running container like:
kubectl exec -it get-labels -n get-labels -- bash
Install the jq tool in the container:
yum install epel-release -y && yum update -y && yum install jq -y
Set up shell variables:
# API Server Address
APISERVER=https://kubernetes.default.svc
# Path to ServiceAccount token
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
# Read this Pod's namespace
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICEACCOUNT}/token)
# Reference the internal certificate authority (CA)
CACERT=${SERVICEACCOUNT}/ca.crt
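Optionally, you can sanity-check these variables with a request to the API discovery endpoint; on most clusters any authenticated service account can read it, and a JSON list of API versions confirms the token and CA are wired up correctly:
# Should return the APIVersions object if authentication works.
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" ${APISERVER}/api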
If you want to get a list of all nodes and their corresponding labels, then use the following command:
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/nodes | jq '.items[].metadata | {name,labels}'
Otherwise, if you want the labels of a particular node, use:
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/nodes/<nodename> | jq '.metadata.labels'
Please replace <nodename> with the name of the intended node.
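If what you actually need are the labels of the node your own pod is scheduled on, one option (a sketch, not part of the steps above) is to expose the node name to the pod through the Downward API; the NODE_NAME variable name is a hypothetical choice for this example. Add this to the container spec of the pod manifest above:
    # Hypothetical addition to the get-labels container spec:
    # expose the scheduled node's name via the Downward API.
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
Then, inside the container, the pod's own node labels can be fetched with:
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" ${APISERVER}/api/v1/nodes/${NODE_NAME} | jq '.metadata.labels'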
N.B. You can choose to include the installation of the jq tool in the Dockerfile from which your container image is built, and to use environment variables for the shell variables. We have done neither in this answer, in order to explain how the method works.
Upvotes: 1