Reputation: 5451
Being new to Google Cloud Platform, I have a basic question about the nodes that are part of a Kubernetes cluster.
With free-tier access, I have launched a Kubernetes cluster with 3 nodes.
kubectl get nodes
NAME                                       STATUS    ROLES     AGE       VERSION
gke-cluster-1-default-pool-5ac7520f-097t   Ready     <none>    20h       v1.7.8-gke.0
gke-cluster-1-default-pool-5ac7520f-9lp7   Ready     <none>    20h       v1.7.8-gke.0
gke-cluster-1-default-pool-5ac7520f-vhs2   Ready     <none>    20h       v1.7.8-gke.0
While exploring the cluster, I got the impression that the launched nodes are just pods, not VMs or servers.
$ kubectl --namespace=kube-system get pods
NAME                                                  READY     STATUS    RESTARTS   AGE
kube-proxy-gke-cluster-1-default-pool-5ac7520f-097t   1/1       Running   0          20h
kube-proxy-gke-cluster-1-default-pool-5ac7520f-9lp7   1/1       Running   0          20h
kube-proxy-gke-cluster-1-default-pool-5ac7520f-vhs2   1/1       Running   0          20h
(NOTE: info about other pods removed for readability)
My confusion is: how is high availability of the nodes in the cluster achieved, given that the number of instances of each pod is 1/1?
Upvotes: 0
Views: 545
Reputation: 46
I think you're missing how the nodes are actually set up.
Each of the nodes is a separate virtual machine in your network; together they host the Kubernetes cluster network. kube-proxy is just one component running on each node, and it is what allows pods, services, and deployments to communicate across nodes. There are also a number of other system deployments running on those nodes. I normally ignore the system namespaces since they just work, but you can view them in the GCP web interface under Kubernetes Engine > Workloads, then delete the "Is system object" filter.
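As a quick sanity check that the nodes are Compute Engine VMs and not pods, you can compare the node list against the instance list (a minimal sketch, assuming the gcloud CLI is pointed at the same project and zone as the cluster):

# The GKE nodes show up here as regular Compute Engine instances;
# the names match the output of `kubectl get nodes` above.
gcloud compute instances list

# Describing a node reports machine type, OS image, and capacity,
# which a pod would not have.
kubectl describe node gke-cluster-1-default-pool-5ac7520f-097t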
kube-proxy is started as a pod on each node. Should it fail, Kubernetes will restart it on that node, which will (hopefully) heal the node and allow it to communicate correctly again.
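You can watch this self-healing behaviour yourself (a sketch using one of the pod names from your output; deleting the pod just triggers a restart on the same node):

# Each kube-proxy pod is pinned to one node (note the NODE column).
kubectl --namespace=kube-system get pods -o wide

# Delete one of them and watch it come back; the kubelet on that node
# should recreate it almost immediately.
kubectl --namespace=kube-system delete pod kube-proxy-gke-cluster-1-default-pool-5ac7520f-097t
kubectl --namespace=kube-system get pods --watch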
Upvotes: 2