Reputation: 1850
When I run the command below:
kubectl get po -n kube-system
I get this error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Upvotes: 1
Views: 1478
Reputation: 1
I have a K8s server that was working well, and I hadn't made any management or update changes to it.
One day it suddenly went down after a reboot. The error looks like this when you issue any kubectl command:
The connection to the server xx.xx.xx.xx:6443 was refused - did you
specify the right host or port?
One very likely reason is certificate expiration.
Check the K8s certs:
admin@hostname:~$ sudo -i
[sudo] password for admin:
root@hostname:~# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jun 09, 2022 07:52 UTC   135d                                    no
apiserver                  Jun 09, 2022 07:52 UTC   135d            ca                      no
apiserver-etcd-client      Jun 09, 2022 07:52 UTC   135d            etcd-ca                 no
apiserver-kubelet-client   Jun 09, 2022 07:52 UTC   135d            ca                      no
controller-manager.conf    Jun 09, 2022 07:52 UTC   135d                                    no
etcd-healthcheck-client    Dec 31, 2021 04:11 UTC   <invalid>       etcd-ca                 no
etcd-peer                  Dec 31, 2021 04:11 UTC   <invalid>       etcd-ca                 no
etcd-server                Dec 31, 2021 04:11 UTC   <invalid>       etcd-ca                 no
front-proxy-client         Jun 09, 2022 07:52 UTC   135d            front-proxy-ca          no
scheduler.conf             Jun 09, 2022 07:52 UTC   135d                                    no

CERTIFICATE AUTHORITY      EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                         Nov 03, 2030 13:32 UTC   8y              no
etcd-ca                    Nov 03, 2030 13:32 UTC   8y              no
front-proxy-ca             Nov 03, 2030 13:32 UTC   8y              no
And we can see some certs are invalid. Renew them:
root@hostname:~# kubeadm certs renew etcd-healthcheck-client
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[renew] Error reading configuration from the Cluster. Falling back to default configuration
certificate for liveness probes to healthcheck etcd renewed
root@hostname:~# kubeadm certs renew etcd-peer
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[renew] Error reading configuration from the Cluster. Falling back to default configuration
certificate for etcd nodes to communicate with each other renewed
root@hostname:~# kubeadm certs renew etcd-server
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[renew] Error reading configuration from the Cluster. Falling back to default configuration
certificate for serving etcd renewed
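The components that use these certs won't pick up the renewed files until they are restarted. A rough sketch of the follow-up steps, assuming a standard kubeadm layout with static pod manifests under /etc/kubernetes/manifests (paths and timing are illustrative, not exact):
root@hostname:~# kubeadm certs check-expiration                   # confirm nothing is still <invalid>
root@hostname:~# mv /etc/kubernetes/manifests/etcd.yaml /tmp/     # kubelet stops the etcd static pod
root@hostname:~# sleep 20
root@hostname:~# mv /tmp/etcd.yaml /etc/kubernetes/manifests/     # kubelet recreates etcd with the new certs
root@hostname:~# systemctl restart kubelet
root@hostname:~# kubectl get po -n kube-system                    # the API server on :6443 should answer again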
Upvotes: 0
Reputation: 505
Hey, I resolved my error.
Try this GitHub thread; read the entire comment and do exactly as it says:
https://github.com/Hawaiideveloper/Infastructure-as-Code-Sample_Env/issues/15#issuecomment-811377749
It turned out to be a combination of commands, references to incompatible Docker versions, and some minor things that the Kubernetes documentation (as of 03-31-2021) did not mention.
Upvotes: 0
Reputation: 27170
localhost:8080 is the default server to connect to if there is no kubeconfig present on your system (for the current user).
Follow the instructions on the page linked. You will need to execute something like:
gcloud container clusters get-credentials [CLUSTER_NAME]
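The gcloud command above is specific to GKE clusters. If your cluster was built with kubeadm instead, the fix is simply to give kubectl a kubeconfig; a minimal sketch, assuming admin.conf sits at its default path on the control-plane node:
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl config view --minify   # should now show your API server address, not localhost:8080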
Upvotes: 4