Reputation: 2266
I want to set the k8s kube-proxy config file permissions for hardening purposes. I'm wondering how the kube-proxy process can be running with the --config flag set to a path (/var/lib/kube-proxy/config.conf) that can't be found...
In fact, checking the kube-proxy process gives this:
[centos@cpu-node0 ~]$ ps -ef | grep kube-proxy
root 20890 20872 0 Oct20 ? 00:19:23 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=cpu-node0
centos 55623 51112 0 14:44 pts/0 00:00:00 grep --color=auto kube-proxy
But the file /var/lib/kube-proxy/config.conf does not exist:
[centos@cpu-node0 ~]$ ll /var/lib/kube-proxy/config.conf
ls: cannot access /var/lib/kube-proxy/config.conf: No such file or directory
Why?
Upvotes: 6
Views: 6600
Reputation: 11
For kube-proxy, the configuration file is by default volume-mounted from a ConfigMap into the pod under /var/lib/kube-proxy (which provides both config.conf and kubeconfig.conf).
➜ ~ kubectl describe daemonset kube-proxy -n kube-system
Name: kube-proxy
Selector: k8s-app=kube-proxy
Node-Selector: kubernetes.io/os=linux
Labels: k8s-app=kube-proxy
Annotations: deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 2
Current Number of Nodes Scheduled: 2
Number of Nodes Scheduled with Up-to-date Pods: 2
Number of Nodes Scheduled with Available Pods: 2
Number of Nodes Misscheduled: 0
Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: k8s-app=kube-proxy
Service Account: kube-proxy
Containers:
kube-proxy:
Image: registry.k8s.io/kube-proxy:v1.28.2
Port: <none>
Host Port: <none>
Command:
/usr/local/bin/kube-proxy
--config=/var/lib/kube-proxy/config.conf
--hostname-override=$(NODE_NAME)
Environment:
NODE_NAME: (v1:spec.nodeName)
Mounts:
/lib/modules from lib-modules (ro)
/run/xtables.lock from xtables-lock (rw)
/var/lib/kube-proxy from kube-proxy (rw)
Volumes:
kube-proxy:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kube-proxy
Optional: false
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
Priority Class Name: system-node-critical
Events: <none>
If you check the ConfigMap that is mounted into kube-proxy, it will look like this:
➜ ~ kubectl describe configmap kube-proxy -n kube-system
Name: kube-proxy
Namespace: kube-system
Labels: app=kube-proxy
Annotations: kubeadm.kubernetes.io/component-config.hash: sha256:2294f8ea53531bb5c20e91dc4a8571b9ac47abd2c5ec99e58ce4d7b115418fdd
Data
====
config.conf:
----
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
bindAddressHardFail: false
clientConnection:
acceptContentTypes: ""
burst: 0
contentType: ""
kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
qps: 0
clusterCIDR: ""
configSyncPeriod: 0s
conntrack:
maxPerCore: null
min: null
tcpCloseWaitTimeout: null
tcpEstablishedTimeout: null
detectLocal:
bridgeInterface: ""
interfaceNamePrefix: ""
detectLocalMode: ""
enableProfiling: false
healthzBindAddress: ""
hostnameOverride: ""
iptables:
localhostNodePorts: null
masqueradeAll: false
masqueradeBit: null
minSyncPeriod: 0s
syncPeriod: 0s
ipvs:
excludeCIDRs: null
minSyncPeriod: 0s
scheduler: ""
strictARP: false
syncPeriod: 0s
tcpFinTimeout: 0s
tcpTimeout: 0s
udpTimeout: 0s
kind: KubeProxyConfiguration
logging:
flushFrequency: 0
options:
json:
infoBufferSize: "0"
verbosity: 0
metricsBindAddress: ""
mode: ""
nodePortAddresses: null
oomScoreAdj: null
portRange: ""
showHiddenMetricsForVersion: ""
winkernel:
enableDSR: false
forwardHealthCheckVip: false
networkName: ""
rootHnsEndpointName: ""
sourceVip: ""
kubeconfig.conf:
----
apiVersion: v1
kind: Config
clusters:
- cluster:
certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
server: https://k8smaster.example.net:6443
name: default
contexts:
- context:
cluster: default
namespace: default
user: default
name: default
current-context: default
users:
- name: default
user:
tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
BinaryData
====
Events: <none>
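Each key under Data above (config.conf and kubeconfig.conf) is projected as a file under the mount path /var/lib/kube-proxy inside the pod, which is why the process sees --config=/var/lib/kube-proxy/config.conf while the host filesystem has no such file. As a small reproducible sketch (the here-doc inlines the relevant lines of the dump above, so no cluster is needed), the kubeconfig path referenced inside config.conf can be pulled straight out:

```shell
# Extract the kubeconfig path from the config.conf data shown above.
awk '/^ *kubeconfig:/{print $2}' <<'EOF'
clientConnection:
  acceptContentTypes: ""
  burst: 0
  contentType: ""
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 0
EOF
# prints /var/lib/kube-proxy/kubeconfig.conf
```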
If you want to check the contents of the ConfigMap as kube-proxy sees them, you need to exec into the pod and read the file there:
➜ ~ kubectl exec kube-proxy-dmw6q -n kube-system -it -- /bin/cat /var/lib/kube-proxy/kubeconfig.conf
apiVersion: v1
kind: Config
clusters:
- cluster:
certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
server: https://k8smaster.example.net:6443
name: default
contexts:
- context:
cluster: default
namespace: default
user: default
name: default
current-context: default
users:
- name: default
user:
tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
Upvotes: 1
Reputation: 2266
Absolutely @confused genius, the kube-proxy process and its config files reside inside the kube-proxy pod.
[centos@hp-gpu-node2 ~]$ ps -ef | grep proxy
root 807 1 0 Oct20 ? 00:00:00 /usr/sbin/gssproxy -D
root 12256 12239 0 Oct20 ? 00:18:42 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=hp-gpu-node2
centos 15338 10073 0 22:01 pts/0 00:00:00 grep --color=auto proxy
[centos@hp-gpu-node2 ~]$ ps -ef | grep 12239
root 12239 4681 0 Oct20 ? 00:00:37 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/415bc97a940caf1493db295d4b794e7313a431c6189d775d6e66a1337e13802f -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 12256 12239 0 Oct20 ? 00:18:42 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=hp-gpu-node2
centos 15452 10073 0 22:01 pts/0 00:00:00 grep --color=auto 12239
[centos@hp-gpu-node2 ~]$
Now list the containers and get the kube-proxy container's short ID:
[centos@hp-gpu-node2 ~]$ sudo docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b4ef8516ffba nvidia/k8s-device-plugin "nvidia-device-plugin" 13 days ago Up 13 days k8s_nvidia-device-plugin-ctr_nvidia-device-plugin-daemonset-1.12-k2jsn_kube-system_5eb15b43-12ce-11eb-a793-566f15970179_0
5361f340e00c gcr.io/google_containers/pause-amd64:3.1 "/pause" 13 days ago Up 13 days k8s_POD_nvidia-device-plugin-daemonset-1.12-k2jsn_kube-system_5eb15b43-12ce-11eb-a793-566f15970179_0
415bc97a940c gcr.io/google-containers/kube-proxy "/usr/local/bin/kube…" 13 days ago Up 13 days k8s_kube-proxy_kube-proxy-4d4hl_kube-system_3ebe3bf6-12cd-11eb-a793-566f15970179_0
762943484b1e gcr.io/google_containers/pause-amd64:3.1 "/pause" 13 days ago Up 13 days k8s_POD_kube-proxy-4d4hl_kube-system_3ebe3bf6-12cd-11eb-a793-566f15970179_0
4bfdabe6597c k8s.gcr.io/k8s-dns-node-cache "/node-cache -locali…" 13 days ago Up 13 days k8s_node-cache_nodelocaldns-hpvpb_kube-system_0c398cf7-12cd-11eb-a793-566f15970179_0
52c0f95f2d4c gcr.io/google_containers/pause-amd64:3.1 "/pause" 13 days ago Up 13 days k8s_POD_nodelocaldns-hpvpb_kube-system_0c398cf7-12cd-11eb-a793-566f15970179_0
a34ec37154a8 calico/node "start_runit" 13 days ago Up 13 days k8s_calico-node_calico-node-6vrn4_kube-system_fb95886a-12cc-11eb-a793-566f15970179_0
09895989f5b7 gcr.io/google_containers/pause-amd64:3.1 "/pause" 13 days ago Up 13 days k8s_POD_calico-node-6vrn4_kube-system_fb95886a-12cc-11eb-a793-566f15970179_0
eee5cc5a8e7a 53f3fd8007f7 "nginx -g 'daemon of…" 13 days ago Up 13 days k8s_nginx-proxy_nginx-proxy-hp-gpu-node2_kube-system_b853c9cd2cc0a3a71070731d4f6cfbca_0
d59e91a314a2 gcr.io/google_containers/pause-amd64:3.1 "/pause" 13 days ago Up 13 days k8s_POD_nginx-proxy-hp-gpu-node2_kube-system_b853c9cd2cc0a3a71070731d4f6cfbca_0
[centos@hp-gpu-node2 ~]$
Check the kube-proxy config file permissions:
[centos@hp-gpu-node2 ~]$ sudo docker exec -t -i 415bc97a940 stat -c %a /var/lib/kube-proxy/config.conf
777
[centos@hp-gpu-node2 ~]$
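The 777 above would fail a typical CIS-style hardening check, which expects the file to be 644 or stricter. A minimal sketch of that check, assuming the octal mode string came back from stat as above (it masks permission bits rather than comparing numbers, so a mode like 611 is caught too):

```shell
# Flag any permission bits beyond rw-r--r-- (0644) as a hardening finding.
check_mode() {
  mode=$1
  # bits set in $mode that 0644 does not allow
  extra=$(( 0$mode & ~0644 & 0777 ))
  if [ "$extra" -eq 0 ]; then
    echo "OK: $mode is 644 or stricter"
  else
    printf 'FAIL: %s has extra bits %o beyond 644\n' "$mode" "$extra"
  fi
}

check_mode 600   # prints OK: 600 is 644 or stricter
check_mode 777   # prints FAIL: 777 has extra bits 133 beyond 644
```

Feeding it the stat output from the docker exec above would report the 777 as a finding; fixing it, though, means changing the mode inside the pod (or in the DaemonSet spec), not on the host.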
Upvotes: 2
Reputation: 3244
I am also facing this issue on my setup (1.19):
[root@project1kubemaster ~]# kubectl version --short
Client Version: v1.19.3
Server Version: v1.19.3
[root@project1kubemaster ~]# ps -ef | grep kube-proxy
root 2103 2046 0 11:30 ? 00:00:00 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=project1kubemaster
[root@project1kubemaster ~]# ll /var/lib/kube-proxy/config.conf
ls: cannot access /var/lib/kube-proxy/config.conf: No such file or directory
One more interesting thing: the kube-proxy binary is also not found on the host:
[root@project1kubemaster ~]# ls /usr/local/bin/kube-proxy
ls: cannot access /usr/local/bin/kube-proxy: No such file or directory
The above made me realize that the kube-proxy binary is running inside the kube-proxy container on that node:
[root@project1kubemaster ~]# kubectl get pods -n kube-system -o wide | grep proxy
kube-proxy-ffbqr 1/1 Running 0 27m <IP> project1kubeworker2 <none> <none>
kube-proxy-r9pz9 1/1 Running 0 29m <IP> project1kubemaster <none> <none>
kube-proxy-zcrtw 1/1 Running 0 27m <IP> project1kubeworker1 <none> <none>
[root@project1kubemaster ~]# kubectl exec -it kube-proxy-r9pz9 -n kube-system -- /bin/sh
#
#
# find / -name config.conf
/var/lib/kube-proxy/..2020_11_02_16_30_32.787002112/config.conf
/var/lib/kube-proxy/config.conf
In short, it seems the kube-proxy binary and config files live inside the kube-proxy pod of that node and run there. The reason the process still shows up in the host's ps -ef output is that the host's PID namespace is the parent of every container's, so container processes are always visible from the host. We can also see that the parent PID of the kube-proxy process is the containerd-shim of its container:
[root@project1kubemaster ~]# ps -ef | grep 2046
root 2046 16904 0 11:30 ? 00:00:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/c3e9bf6ecdcdd0f56d0c76711cea4cadd023cd6ef82bf8312311248a7b0501a4 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 2103 2046 0 11:30 ? 00:00:00 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=project1kubemaster
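The namespace relationship can be inspected directly under /proc: every process exposes its PID namespace as a symlink at /proc/<pid>/ns/pid, and two processes share a namespace exactly when those links resolve to the same inode. A minimal sketch (Linux only; comparing the host's PID 1 against the kube-proxy PID from ps -ef needs root, so it is shown here against the current shell):

```shell
# Print this shell's PID namespace identity, e.g. pid:[4026531836].
# To test the claim above, run the same readlink as root against
# /proc/1/ns/pid (host) and /proc/<kube-proxy pid>/ns/pid and compare:
# different inodes mean kube-proxy runs in its own PID namespace, even
# though the host can still see the process in ps output.
readlink /proc/self/ns/pid
```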
Upvotes: 5