Vidya

Reputation: 657

Kubernetes Unable to connect to the server: dial tcp x.x.x.x:6443: i/o timeout

I am using a test Kubernetes cluster (a kubeadm setup with 1 master and 2 nodes). My public IP changes from time to time, and when it does, I am unable to connect to the cluster and get the error below:

 Kubernetes Unable to connect to the server: dial tcp x.x.x.x:6443: i/o timeout

I also have a private IP, 10.10.10.10, which stays the same all the time.

I created the Kubernetes cluster using the command below:

 kubeadm init --control-plane-endpoint 10.10.10.10

But it still fails because the certificates are signed for the public IP, and I get the error below:

 The connection to the server x.x.x.x:6443 was refused - did you specify the right host or port?

Can someone help me set up kubeadm so that it allows all IPs (something like 0.0.0.0)? I am fine with that from a security viewpoint since this is a test setup. Or is there any permanent fix?

Upvotes: 2

Views: 6167

Answers (1)

matt_j

Reputation: 4614

Since @Vidya has already solved this issue by using a static IP address, I decided to provide a Community Wiki answer just for better visibility to other community members.

First of all, it is not recommended to have a frequently changing master/server IP address.
As we can see in the GitHub discussion kubernetes/88648, kubeadm does not provide an easy way to deal with this.
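
As a side note, kubeadm init can sign the API server certificate for additional names or IPs via the --apiserver-cert-extra-sans flag; signing it for a stable DNS name up front means kubectl can keep using that name even when the public IP changes. The DNS name below is only a hypothetical example:

 kubeadm init --control-plane-endpoint 10.10.10.10 --apiserver-cert-extra-sans k8s-master.example.com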

However, there are also a few workarounds that can help when the IP address on the Kubernetes master node has already changed. Based on the discussion Changing master IP address, I prepared a script that regenerates the certificates and re-initializes the master node.

This script might be helpful, but I recommend running it one command at a time (it is safer).
In addition, you may need to adapt some steps to your needs.
NOTE: In the example below, I'm using Docker as the container runtime.

root@kmaster:~# cat reinit_master.sh 
#!/bin/bash
set -e

echo "Stopping kubelet and docker"
systemctl stop kubelet docker

echo "Making backup kubernetes data"
mv /etc/kubernetes /etc/kubernetes-backup
mv /var/lib/kubelet /var/lib/kubelet-backup

echo "Restoring certificates"
mkdir /etc/kubernetes
cp -r /etc/kubernetes-backup/pki /etc/kubernetes/
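# apiserver.* and etcd/peer.* embed the old IP address, so they are removed here;
# kubeadm will regenerate them during init, while the CA and remaining certs are kept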
rm /etc/kubernetes/pki/{apiserver.*,etcd/peer.*}

echo "Starting docker"
systemctl start docker

echo "Reinitializing master node"
kubeadm init --ignore-preflight-errors=DirAvailable--var-lib-etcd

echo "Updating kubeconfig file"
cp /etc/kubernetes/admin.conf ~/.kube/config
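
After the script finishes, it may be worth verifying that the API server certificate was re-signed for the current address and that the control plane responds again, for example:

 openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
 kubectl get nodes -o wide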

Then you need to rejoin the worker nodes to the cluster.
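
A minimal sketch of the rejoin (assuming the workers were originally joined with kubeadm; the token and hash are placeholders printed by the first command):

 # on the master: print a fresh join command (new token + CA cert hash)
 kubeadm token create --print-join-command

 # on each worker: wipe the old state first (kubeadm reset is destructive), then join
 kubeadm reset
 kubeadm join 10.10.10.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>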

Upvotes: 2
