unsafe_where_true

Reputation: 6300

init a kubernetes cluster with kubeadm but public IP on aws

I am trying to follow this tutorial: https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-cluster-using-kubeadm-on-ubuntu-18-04

Important difference: I need to run the master on a specific node, and the worker nodes are from different regions on AWS.

So it all went well until I wanted to join the nodes (step 5). The command succeeded but kubectl get nodes still only showed the master node.

I looked at the join command and it contained the master's private IP address: join 10.1.1.40. I guess that cannot work if the workers are in a different region (note: later we will probably even need to add nodes from different providers, so as long as there is no serious security concern, it should work via public IPs).

So while kubeadm init --pod-network-cidr=10.244.0.0/16 initialized the cluster, it did so with this internal IP. I then tried kubeadm init --apiserver-advertise-address <Public-IP-Addr> --apiserver-bind-port 16443 --pod-network-cidr=10.244.0.0/16

But then init always hangs and never completes. The kubelet log prints many lines like

E0610 19:24:24.188347 1051920 kubelet.go:2267] node "ip-x-x-x-x" not found

where "ip-x-x-x-x" seems to be the master node's hostname on AWS.
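For context, the node name kubelet registers under defaults to the machine's hostname, which is why the log references "ip-x-x-x-x". A quick way to see the name kubelet will use (the example output is an assumption, not taken from the question):

```shell
# kubelet's default node name is the machine's (lowercase) hostname.
hostname
# On AWS Ubuntu images this is typically derived from the private IP,
# e.g. ip-10-1-1-40, not from the public DNS name.
```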

Upvotes: 0

Views: 636

Answers (1)

unsafe_where_true

Reputation: 6300

I think what made it work was setting the master's hostname to its public DNS name and then passing that as the --control-plane-endpoint argument, without --apiserver-advertise-address (but with --apiserver-bind-port, since I need to run the API server on a non-default port).
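The steps above can be sketched as follows; the public DNS name is a placeholder, and port 16443 is taken from the question:

```shell
# Set the master's hostname to its public DNS name (placeholder value).
sudo hostnamectl set-hostname ec2-x-x-x-x.compute-1.amazonaws.com

# Initialize the cluster, advertising the public DNS name instead of
# the private IP, and binding the API server to the custom port.
sudo kubeadm init \
  --control-plane-endpoint ec2-x-x-x-x.compute-1.amazonaws.com:16443 \
  --apiserver-bind-port 16443 \
  --pod-network-cidr=10.244.0.0/16
```

Note that the security group on AWS must also allow inbound traffic on the chosen API server port for the workers' join command to reach the master.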

I need to have it run longer to confirm, but so far it looks good.

Upvotes: 1
