Mike

Why are IP addresses in GKE for fluentd/kube-proxy/prometheus equal to node addresses?

I am running a Kubernetes cluster on GKE and I noticed that in kube-system the IP addresses of pods whose names start with

fluentd-gcp-...
kube-proxy-gke-gke-dev-cluster-default-pool-...
prometheus-to-...

are the same as those of the nodes, while other pods such as

event-exporter-v0.3.0-...
stackdriver-metadata-agent-cluster-level-...
fluentd-gcp-scaler-...
heapster-gke-...
kube-dns-...
l7-default-backend-...
metrics-server-v0.3.3-...

have IP addresses in the pod address range, e.g.

kube-system   fluentd-gcp-scaler-bfd6cf8dd-58m8j                          1/1     Running   0          23h   10.36.1.6   dev-cluster-default-pool-c8a74531-96j4   <none>           <none>
kube-system   fluentd-gcp-v3.1.1-24n5s                                    2/2     Running   0          24h   10.10.1.5   dev-cluster-default-pool-c8a74531-96j4   <none>           <none>

Here the pod IP range is 10.36.0.0/14 and the nodes are on 10.10.1.0/24. What is specific about the first three?

Answers (1)

Arghya Sadhu

This is because pods such as kube-proxy, Fluentd, and Prometheus run directly on the host network via hostNetwork: true. You can describe those pods and verify that hostNetwork: true is set in their spec.
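
For example, you can query the field directly with kubectl (the pod name below is taken from the output in the question; substitute one of your own):

kubectl get pod fluentd-gcp-v3.1.1-24n5s -n kube-system -o jsonpath='{.spec.hostNetwork}'
# prints "true" for pods that share the node's network namespace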

As for why these pods need to run on the host network in the first place: kube-proxy needs access to the host's iptables, Prometheus collects metrics from the host, and Fluentd collects logs from the host system.

You can deploy a sample pod such as nginx with hostNetwork: true and it will get the node's IP. If you remove hostNetwork: true, it will get an IP from the pod CIDR range.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
  restartPolicy: Always
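  # hostNetwork: true puts this pod in the node's network namespace, so it gets the node's IP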
  hostNetwork: true
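
To try it out, apply the manifest and check the pod's IP; the file name used here (nginx-hostnetwork.yaml) is just an example:

kubectl apply -f nginx-hostnetwork.yaml
kubectl get pod nginx -o wide   # the IP column will show the node's address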
