Nitish Goel

Reputation: 69

Connection refused error on worker node in kubernetes

I'm setting up a 2-node Kubernetes cluster: 1 master node and 1 worker node. After setting up the master node, I installed docker, kubeadm, kubelet, and kubectl on the worker node and then ran the join command. On the master node I see both nodes (master and worker) in the Ready state, but when I try to run any kubectl command on the worker node, I get the connection refused error below. I do not see any admin.conf on the worker, and nothing is set in .kube/config. Are these files also needed on the worker node? If so, how do I get them, and how do I resolve the error below? Appreciate your help.

root@kubework:/etc/kubernetes# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

root@kubework:/etc/kubernetes# kubectl cluster-info

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@kubework:/etc/kubernetes#

Upvotes: 2

Views: 8963

Answers (7)

Amit Verma

Reputation: 1

I came across the same issue. I found that my 3 VMs were sharing the same IP (since I was using a NAT network in VirtualBox), so I switched to a bridged network so that the 3 VMs got 3 different IPs, then followed the installation guide again and the k8s cluster installed successfully.
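The shared-IP symptom above can be spotted quickly. A minimal sketch, using sample data that stands in for real `kubectl get nodes -o wide` output (the node names and IPs are hypothetical; NAT-ed VirtualBox VMs typically all report 10.0.2.15):

```shell
# Sample node/IP list standing in for `kubectl get nodes -o wide` output.
cat > /tmp/node-ips.txt <<'EOF'
master  10.0.2.15
worker1 10.0.2.15
worker2 10.0.2.15
EOF
# If the count of distinct IPs is less than the node count, NAT is the culprit.
distinct=$(awk '{print $2}' /tmp/node-ips.txt | sort -u | wc -l)
echo "distinct IPs: $distinct"
```

With a bridged network, each node should report its own address and the distinct count should equal the node count.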

Amit

Upvotes: 0

Vikram

Reputation: 643

I tried the following commands:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/kubelet.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

but I got this error:

[vikram@node2 ~]$ kubectl version
Error in configuration:
* unable to read client-cert /var/lib/kubelet/pki/kubelet-client-current.pem for default-auth due to open /var/lib/kubelet/pki/kubelet-client-current.pem: permission denied
* unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem for default-auth due to open /var/lib/kubelet/pki/kubelet-client-current.pem: permission denied

The following works, but it's really a workaround, not a fix:

sudo kubectl --kubeconfig /etc/kubernetes/kubelet.conf version

I was able to fix it by copying kubelet-client-current.pem from /var/lib/kubelet/pki/ to a location inside $HOME and modifying $HOME/.kube/config to point at the certs' new path. Is this normal?
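The path rewrite described above can be sketched as follows, demonstrated here on a sample kubeconfig fragment (the file contents are illustrative, not from a real cluster; the `$HOME/.kube/pki` destination is an assumption):

```shell
# Sample kubeconfig fragment standing in for $HOME/.kube/config.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
users:
- name: default-auth
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
EOF
# After copying the cert to $HOME/.kube/pki, repoint both entries at it:
sed -i "s|/var/lib/kubelet/pki|$HOME/.kube/pki|g" "$cfg"
grep -c 'kube/pki/kubelet-client-current.pem' "$cfg"
```

On a real node you would also `sudo cp` the cert itself to the new location and `chown` it to your user, so kubectl can read it without sudo.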

Upvotes: 2

Apurwa Singh

Reputation: 1

davidxxx's solution worked for me.

In my case, I found that a file exists on the worker nodes at the following path:

/etc/kubernetes/kubelet.conf

You can copy this to ~/.kube/config and it works as well. I tested it myself.

Upvotes: 0

Eappan Benjamin

Reputation: 1

I tried many of the solutions that just copy /etc/kubernetes/admin.conf to ~/.kube/config, but none of them worked for me.

My OS is Ubuntu, and the issue was resolved by removing, purging, and re-installing the following:

  1. sudo dpkg -r kubeadm kubectl
  2. sudo dpkg -P kubeadm kubectl
  3. sudo apt-get install -y kubelet kubeadm kubectl
  4. sudo apt-mark hold kubelet kubeadm kubectl
  5. curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" (downloading kubectl again; this is what actually worked)
  6. kubectl get nodes

NAME       STATUS   ROLES                  AGE   VERSION
mymaster   Ready    control-plane,master   31h   v1.20.4
myworker   Ready    <none>                 31h   v1.20.4

Upvotes: 0

Mark

Reputation: 4067

This is expected behavior, even when using kubectl on the master node as a non-root account: by default this config file is stored for the root account in /etc/kubernetes/admin.conf.

To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Alternatively on the master, if you are the root user, you can run:
    export KUBECONFIG=/etc/kubernetes/admin.conf

Optionally, to control your cluster from machines other than the control-plane node:

scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes

Note:

The KUBECONFIG environment variable holds a list of kubeconfig files. For Linux and Mac, the list is colon-delimited. For Windows, the list is semicolon-delimited. The KUBECONFIG environment variable is not required. If the KUBECONFIG environment variable doesn't exist, kubectl uses the default kubeconfig file, $HOME/.kube/config.
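The colon-delimited list on Linux/macOS looks like this (the second file name is hypothetical, standing in for any extra kubeconfig you might merge in):

```shell
# Two kubeconfig files joined with ':' — kubectl merges them in order.
export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/dev-cluster.conf
# Print each entry on its own line to inspect the list:
echo "$KUBECONFIG" | tr ':' '\n'
```

On Windows the separator would be `;` instead of `:`.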

Upvotes: 1

davidxxx

Reputation: 131326

root@kubework:/etc/kubernetes# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

kubectl is configured and working by default on the master: it requires a running kube-apiserver pod and a ~/.kube/config file.

Worker nodes don't run their own kube-apiserver; what you want is to reuse the master's configuration to reach it. To achieve that, copy the ~/.kube/config file from the master to ~/.kube/config on the worker (where ~ is the home directory of the user running kubectl on each machine, which may of course differ).
Once that is done, you can use the kubectl command from the worker node exactly as you do from the master node.
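The copy described above can be sketched as follows, simulated locally with two temp directories standing in for the master's and worker's home directories (the file content is a stand-in, not a real kubeconfig):

```shell
# Temp dirs playing the role of $HOME on the master and on the worker.
master_home=$(mktemp -d); worker_home=$(mktemp -d)
mkdir -p "$master_home/.kube" "$worker_home/.kube"
echo 'apiVersion: v1' > "$master_home/.kube/config"   # stand-in kubeconfig
# On a real cluster this step would be done over SSH, e.g.:
#   scp root@<master-host>:~/.kube/config ~/.kube/config
cp "$master_home/.kube/config" "$worker_home/.kube/config"
cat "$worker_home/.kube/config"
```

After the real copy, kubectl on the worker talks to the master's kube-apiserver using the credentials in that file.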

Upvotes: 8

Dashrath Mundkar

Reputation: 9174

Yes, these files are needed. Copy the config file into the respective .kube/config location on the worker nodes.

Upvotes: 0
