Dan

Reputation: 21

Kubernetes NodePort / Load Balancer / Ingress on a Multi-Master Setup: Is it necessary?

I'm fairly new to this but I'm setting up a multi-master, high availability Kubernetes cluster of at least 3 masters and a variable number of nodes. I'm trying to do this WITHOUT the use of kube-spray or any other tools, in order to learn the true ins-and-outs. I feel I have most of it down except one bit:

My understanding is:

Some points about my cluster:

My question is, do I need a NodePort/LB/Ingress Controller? I'm trying to understand why I would need any of the above. If a master is joined to an existing cluster alongside another master, the pods are distributed between them, right? Isn't that all I need? Please help me to understand as I feel I'm missing a key concept.

Upvotes: 0

Views: 869

Answers (1)

Prafull Ladha

Reputation: 13443

First of all, NodePort, LoadBalancer and Ingress have nothing to do with setting up the Kubernetes cluster itself. These three are mechanisms for exposing your applications to the outside world, so that you can access those apps from outside the Kubernetes cluster.

There are two parts here:

  1. Setting up the highly available Kubernetes cluster with three masters. I have written a blog post on how to set up a multi-master Kubernetes cluster; it will give you a brief idea of how to do it:

https://velotio.com/blog/2018/6/15/kubernetes-high-availability-kubeadm

  2. Once your Kubernetes cluster is ready, you can start deploying your applications on it (pods, services, etc.). Some of the applications you deploy might need to be exposed to the outside world, for example a website hosted on your Kubernetes cluster that needs to be accessible from the internet. That is where NodePort, LoadBalancer and Ingress come into the picture. The differences between the three, and when to use which, are explained very well in this article:

https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
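To make the distinction concrete, here is a minimal sketch of the simplest of the three, a NodePort Service (the app name, labels and port numbers are hypothetical and would need to match your own Deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-website        # hypothetical Service name
spec:
  type: NodePort
  selector:
    app: my-website       # must match your Deployment's pod labels
  ports:
  - port: 80              # port exposed inside the cluster
    targetPort: 8080      # port the container actually listens on
    nodePort: 30080       # port opened on every node (must be in 30000-32767)
```

After applying this, the app is reachable at http://&lt;any-node-ip&gt;:30080, no matter which node the pods happen to be scheduled on.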

Hope this gives you some clarity.

EDIT: This edit adds the kubeadm config file for 1.13 (see comments):

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "VIRTUAL IP"
controlPlaneEndpoint: "VIRTUAL IP"
etcd:
  external:
    endpoints:
    - https://ETCD_0_IP:2379
    - https://ETCD_1_IP:2379
    - https://ETCD_2_IP:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key

Upvotes: 2
