bherto39

Reputation: 1806

kubectl unable to connect to server: x509: certificate signed by unknown authority

I'm getting an error when running kubectl on one machine (Windows).

The k8s cluster is running on CentOS 7, Kubernetes 1.7, with one master and one worker node.

Here's my .kube\config

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.10.12.7:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:localhost.localdomain
  name: system:node:localhost.localdomain@kubernetes
current-context: system:node:localhost.localdomain@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:localhost.localdomain
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

The cluster was built using kubeadm with the default certificates in the pki directory.

kubectl unable to connect to server: x509: certificate signed by unknown authority

Upvotes: 95

Views: 323412

Answers (20)

Hemjal

Reputation: 300

The real problem is the .kube folder's permissions. I solved it using the following: first, switch to your normal (non-root) user, then:

cd ~/
rm -R .kube/                                              # remove the stale config
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config  # copy the admin kubeconfig
sudo chown $(id -u):$(id -g) $HOME/.kube/config           # hand ownership to your user

Upvotes: 0

Supreeth

Reputation: 11

I had a similar problem on Windows 11 with K8s in Docker Desktop:

  1. Disabled K8s in Docker Desktop
  2. Deleted .kube folder within C:\Users\<ID>\
  3. Enabled K8s in Docker Desktop

Upvotes: 1

Grimbald

Reputation: 11

For me, this happened with a new WSL instance of Debian that was integrated with my Rancher Desktop installation.

The resolution for me was to install the ca-certificates apt package:

sudo apt install ca-certificates

Upvotes: 0

Emil Carpenter

Reputation: 2119

Fix (tested on microk8s)

If running as a non-root user:

# Create new config file
microk8s config > ~/.kube/config.new

# Check if the keys differ
diff ~/.kube/config ~/.kube/config.new

# If the keys differ and nothing else is different,
# remove the current config file and rename the new file
rm ~/.kube/config
mv ~/.kube/config.new ~/.kube/config

If running as root:

# Create new config file
microk8s config > /root/.kube/config.new

# The same procedure as for non-root above is needed, with root paths

Symptoms were:

With kubectl <whatever>

E0617 11:17:58.215313    8464 memcache.go:265] couldn't get current server API group list: Get "https://10.0.2.15:16443/api?timeout=32s": tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "10.152.183.1")
...
Unable to connect to the server: tls: failed to verify certificate: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "10.152.183.1")

With kubectl <whatever> --insecure-skip-tls-verify

E0617 11:07:10.405468  141846 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
...
error: You must be logged in to the server (the server has asked for the client to provide credentials)

Upvotes: 0

Thomas Urban

Reputation: 5061

And there is another scenario for this issue:

  1. The cluster is public, running on remote servers, but access to port 6443 is blocked by a firewall. The only way in is via SSH, with a tunnel forwarding traffic from the client's port 16443 to 127.0.0.1:6443 on one of the remote master nodes (see the sketch after this list). Local port 6443 wasn't used here because Docker for Desktop already uses it for its local cluster.
  2. The kubectl configuration, as provided on the server via kubectl config view --raw, was transferred to the local client and set up there. Running kubectl on a master node of the cluster itself works fine.
  3. The local client's configuration was adapted to use https://127.0.0.1:16443 instead of the master node's FQDN, e.g. https://m1.cluster.com:6443.
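For reference, a minimal sketch of the tunnel described in step 1 (the hostname m1.cluster.com and the user are placeholders; substitute your own master node and SSH user):

ssh -N -L 16443:127.0.0.1:6443 user@m1.cluster.com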

In this scenario, the issue seems to be that the certificate the service offers is issued for the FQDN, and connecting over HTTPS via an IP address instead fails certificate validation.

Hence, I had to use the local client's hosts file to map the FQDN to 127.0.0.1. After that, the hostname change from step 3 above has to be reverted. Changing the port isn't an issue, though, so it can stay at e.g. 16443 as illustrated here.

For those who don't know, on Windows the file is located at C:\Windows\System32\drivers\etc\hosts and can be edited with Notepad run with elevated privileges. Every line maps an IP address to a space-separated list of hostnames it is to be associated with. Thus you would have to add a line like this one:

127.0.0.1 m1.cluster.com
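With the hosts entry in place and the hostname change from step 3 reverted, the server entry in the local kubeconfig would read like this (a sketch using the example names from above):

server: https://m1.cluster.com:16443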

Upvotes: 0

Andy Joiner

Reputation: 6542

In my case, connecting to Azure, this was caused by our security proxy Netskope and was fixed by:

kubectl config set-cluster my-cluster --certificate-authority=path\to\Netskope.pem

az aks get-credentials --resource-group my-resource-group --name my-cluster

Upvotes: 1

Melvin

Reputation: 61

I removed/commented out the certificate-authority-data: line and it worked.

Upvotes: 6

bherto39

Reputation: 1806

Sorry I wasn't able to provide this earlier, I just realized the cause:

So on the master node we're running a kubectl proxy

kubectl proxy --address 0.0.0.0 --accept-hosts '.*'

I stopped this and voila the error was gone.

I'm now able to do

kubectl get nodes
NAME                    STATUS    AGE       VERSION
centos-k8s2             Ready     3d        v1.7.5
localhost.localdomain   Ready     3d        v1.7.5

I hope this helps anyone who stumbles upon this scenario.

Upvotes: 9

Mithun Biswas

Reputation: 1833

In my case, it simply worked by adding --insecure-skip-tls-verify at the end of the kubectl command, as a one-time workaround.
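For example (a sketch; get nodes is just an illustrative subcommand, and skipping TLS verification is only appropriate for debugging):

kubectl get nodes --insecure-skip-tls-verify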

Upvotes: 29

xavierc

Reputation: 522

I got this because I was not connected to the office's VPN.

Upvotes: 1

Thanos

Reputation: 1778

This is an old question, but in case it helps someone else, here is another possible reason.

Let's assume that you have deployed Kubernetes as user x. If the .kube dir is under /home/x and you connect to the node as root or as user y, you will get this error.

You need to switch to that user's profile so kubectl can load the configuration from the .kube dir.
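A sketch of both options, assuming the deploying user was x:

su - x                                   # switch to the deploying user, or ...
export KUBECONFIG=/home/x/.kube/config   # ... point the current session at that user's config
kubectl get nodes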

Update: When copying the ~/.kube/config file content from a master node to a local PC, make sure to replace the load balancer's hostname with a valid IP. In my case the problem was related to the DNS lookup.

Hope this helps.

Upvotes: 1

Tudor

Reputation: 2706

So kubectl doesn't trust the cluster, because for whatever reason the configuration has been messed up (mine included). To fix this, you can use openssl to extract the certificate from the cluster:

openssl.exe s_client -showcerts -connect IP:PORT

IP:PORT should be whatever is written after server: in your config.

Copy and paste everything from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE----- (these lines included) into a new text file, say myCert.crt. If there are multiple entries, copy all of them.

Now go to .kube\config and instead of

certificate-authority-data: <wrongEncodedPublicKey>

put

certificate-authority: myCert.crt

(It assumes you put myCert.crt in the same folder as the config file.) If you made the cert correctly it will trust the cluster (I tried renaming the file and it was no longer trusted afterwards). For what it's worth, certificate-authority-data is simply the base64-encoded PEM certificate, but after a few hours of googling I resorted to this solution, and looking back I think it's more elegant anyway.
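For reference, the same file can be produced non-interactively (a bash sketch; substitute your real IP:PORT, and note that openssl x509 keeps only the first certificate of a multi-entry chain):

openssl s_client -showcerts -connect IP:PORT </dev/null 2>/dev/null | openssl x509 -outform PEM > myCert.crt
base64 -w0 myCert.crt    # prints an equivalent inline certificate-authority-data value (GNU base64)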

Upvotes: 44

AJRohrer

Reputation: 1061

For those of you who are late to the thread like I was, and for whom none of these answers worked, I may have the solution:

When I copied my .kube/config file over to my Windows 10 machine (with kubectl installed), I didn't change the server address from 127.0.0.1:6443 to the master's IP address, which was 192.168.x.x (a Windows 10 machine connecting to a Raspberry Pi cluster on the same network). Make sure that you do this, and it may fix your problem like it did mine.
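One way to make that change without hand-editing the file (a sketch; the cluster name kubernetes and the 192.168.x.x address are taken from the examples above, so substitute your own):

kubectl config set-cluster kubernetes --server=https://192.168.x.x:6443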

Upvotes: 6

leo

Reputation: 165

This was happening because my company's network does not allow self-signed certificates through. Try switching to a different network.

Upvotes: 9

Diego vDev

Reputation: 1171

One more solution in case it helps anyone:

My scenario:

  • using Windows 10
  • Kubernetes installed via Docker Desktop ui 2.1.0.1
  • the installer created config file at ~/.kube/config
  • the value in ~/.kube/config for server is https://kubernetes.docker.internal:6443
  • using proxy

Issue: kubectl commands to this endpoint were going through the proxy. I figured it out after running kubectl --insecure-skip-tls-verify cluster-info dump, which displayed the proxy's HTML error page.

Fix: just make sure that this URL doesn't go through the proxy. In my case, in bash, I used:

export no_proxy=$no_proxy,*.docker.internal

Upvotes: 43

hrene

Reputation: 91

In my case I resolved this issue by copying the kubelet configuration to my home kube config:

cat /etc/kubernetes/kubelet.conf > ~/.kube/config

Upvotes: 8

Lukasz Dynowski

Reputation: 13570

I got the same error while running $ kubectl get nodes as the root user. I fixed it by pointing the KUBECONFIG environment variable at kubelet.conf:

$ export KUBECONFIG=/etc/kubernetes/kubelet.conf
$ kubectl get nodes

Upvotes: 21

stalin

Reputation: 367

Run:

gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a --project devops1-218400

Here devops1-218400 is my project name; replace it with your own.
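If you are unsure of the project name, the active gcloud configuration will show it:

gcloud config get-value project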

Upvotes: 21

Michael Wiley

Reputation: 11

On GCP

Check your gcloud version:

gcloud version

Then fetch credentials for your cluster:

gcloud container clusters get-credentials <clusterName> --zone=<zoneName>

Get clusterName and zoneName from your console, here: https://console.cloud.google.com/kubernetes/list

(Seen with x509 errors on Marketplace deployments on GCP/Kubernetes.)

Upvotes: 1

JohnBegood

Reputation: 878

In case of this error with a kops-managed cluster, you should export the kubecfg, which contains the certs:

export KOPS_STATE_STORE=s3://<your S3 store>
kops export kubecfg <your cluster name>

Now you should be able to access and see the resources of your cluster.

Upvotes: 0
