Reputation: 41
I used kubeadm to set up an HA cluster (3 masters) with stacked control plane and etcd nodes. But after I ran kubeadm reset to destroy one master, I can't join a new master to the HA cluster anymore:
Step 1: remove the bad etcd member:
docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes k8s.gcr.io/etcd:3.2.24 etcdctl --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --ca-file /etc/kubernetes/pki/etcd/ca.crt --endpoints https://xxx.xxx.xxx.xxx:2379 member remove xxxxxxx
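Before running member remove you need the member ID of the failed node. A minimal sketch of extracting it from etcdctl (v2 API) member list output — the sample line below is a hypothetical example of that output format, not taken from my cluster:

```shell
# Hypothetical sample of one line of 'etcdctl member list' (v2 API) output;
# the real command would be the same docker run invocation as above with
# 'member list' instead of 'member remove'.
sample='8e9e05c52164694d: name=master1 peerURLs=https://172.16.12.216:2380 clientURLs=https://172.16.12.216:2379 isLeader=false'

# The member ID is the field before the first colon; filter by node name:
member_id=$(printf '%s\n' "$sample" | awk -F: '/name=master1/ {print $1}')
echo "$member_id"
```

The printed ID is what you pass as the last argument to member remove.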
Step 2: check the etcd cluster health:
docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes k8s.gcr.io/etcd:3.2.24 etcdctl --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --ca-file /etc/kubernetes/pki/etcd/ca.crt --endpoints https://xxx.xxx.xxx.xxx:2379 cluster-health
……
……
cluster is healthy
Step 3: check the component status:
kubectl get cs
……
……
etcd-0 Healthy {"health":"true"}
Step 4: run kubeadm join to add the new master to the HA cluster, but it fails with:
etcd cluster is not healthy: context deadline exceeded
Can anyone help me solve this problem?
Upvotes: 0
Views: 805
Reputation: 41
$ kubectl -n kube-system edit cm kubeadm-config
Then remove the bad node's information under apiEndpoints. For example, remove these three lines for master1-k8s:
    master1-k8s:
      advertiseAddress: 172.16.12.216
      bindPort: 6443
Finally, you can use kubeadm join to join the control plane node to the HA cluster successfully!
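If you prefer not to edit interactively, the same three lines can be stripped from a saved copy of the ClusterStatus with sed. A sketch, assuming GNU sed and a hypothetical sample file (the sample content below mirrors the apiEndpoints structure from the answer; master2-k8s and its address are made up for illustration):

```shell
# Hypothetical sample of the apiEndpoints section saved from the
# kubeadm-config ConfigMap's ClusterStatus (not real cluster output):
cat > status.yaml <<'EOF'
apiEndpoints:
  master1-k8s:
    advertiseAddress: 172.16.12.216
    bindPort: 6443
  master2-k8s:
    advertiseAddress: 172.16.12.217
    bindPort: 6443
EOF

# Delete the stale entry: the 'master1-k8s:' line plus the next two lines
# (GNU sed address form /pattern/,+N):
sed -i '/master1-k8s:/,+2d' status.yaml
cat status.yaml
```

After this, only the healthy masters remain under apiEndpoints; you would still need to write the result back into the ConfigMap (e.g. via kubectl edit or kubectl apply).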
Upvotes: 2