Nakshatra

Reputation: 703

x509 certificate signed by unknown authority - Kubernetes

I am configuring a Kubernetes cluster with 2 nodes on CoreOS, as described in https://coreos.com/kubernetes/docs/latest/getting-started.html, without flannel. Both servers are on the same network.

But I am getting x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-ca") while running the kubelet on the worker.

I configured the TLS certificates properly on both servers as described in the doc.
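
For reference, a quick way to check that the worker certificate actually chains to the kube-ca (assuming ca.pem is also present at /etc/kubernetes/ssl/ on the worker, as in the guide):

# should print "OK" if worker.pem was signed by the kube-ca in ca.pem
openssl verify -CAfile /etc/kubernetes/ssl/ca.pem /etc/kubernetes/ssl/worker.pem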

The master node is working fine, and kubectl is able to launch containers and pods on the master.

Question 1: How can I fix this problem?

Question 2: Is there any way to configure a cluster without TLS certificates?

CoreOS version:
VERSION=899.15.0
VERSION_ID=899.15.0
BUILD_ID=2016-04-05-1035
PRETTY_NAME="CoreOS 899.15.0"

Etcd conf:

 $ etcdctl member list          
ce2a822cea30bfca: name=78c2c701d4364a8197d3f6ecd04a1d8f peerURLs=http://localhost:2380,http://localhost:7001 clientURLs=http://172.24.0.67:2379

Master: kubelet.service:

[Service]
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
Environment=KUBELET_VERSION=v1.2.2_coreos.0
ExecStart=/opt/bin/kubelet-wrapper \
  --api-servers=http://127.0.0.1:8080 \
  --register-schedulable=false \
  --allow-privileged=true \
  --config=/etc/kubernetes/manifests \
  --hostname-override=172.24.0.67 \
  --cluster-dns=10.3.0.10 \
  --cluster-domain=cluster.local
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target

Master: kube-controller.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-controller-manager
    image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
    command:
    - /hyperkube
    - controller-manager
    - --master=http://127.0.0.1:8080
    - --leader-elect=true 
    - --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --root-ca-file=/etc/kubernetes/ssl/ca.pem
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
      initialDelaySeconds: 15
      timeoutSeconds: 1
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host

Master: kube-proxy.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
    command:
    - /hyperkube
    - proxy
    - --master=http://127.0.0.1:8080
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host

Master: kube-apiserver.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
    command:
    - /hyperkube
    - apiserver
    - --bind-address=0.0.0.0
    - --etcd-servers=http://172.24.0.67:2379
    - --allow-privileged=true
    - --service-cluster-ip-range=10.3.0.0/24
    - --secure-port=443
    - --advertise-address=172.24.0.67
    - --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --client-ca-file=/etc/kubernetes/ssl/ca.pem
    - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    - containerPort: 8080
      hostPort: 8080
      name: local
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host

Master: kube-scheduler.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
    command:
    - /hyperkube
    - scheduler
    - --master=http://127.0.0.1:8080
    - --leader-elect=true
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251
      initialDelaySeconds: 15
      timeoutSeconds: 1

Slave: kubelet.service

[Service]
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests

Environment=KUBELET_VERSION=v1.2.2_coreos.0 
ExecStart=/opt/bin/kubelet-wrapper \
  --api-servers=https://172.24.0.67:443 \
  --register-node=true \
  --allow-privileged=true \
  --config=/etc/kubernetes/manifests \
  --hostname-override=172.24.0.63 \
  --cluster-dns=10.3.0.10 \
  --cluster-domain=cluster.local \
  --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
  --tls-cert-file=/etc/kubernetes/ssl/worker.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target

Slave: kube-proxy.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
    command:
    - /hyperkube
    - proxy
    - --master=https://172.24.0.67:443
    - --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml
    - --proxy-mode=iptables
    securityContext:
      privileged: true
    volumeMounts:
      - mountPath: /etc/ssl/certs
        name: "ssl-certs"
      - mountPath: /etc/kubernetes/worker-kubeconfig.yaml
        name: "kubeconfig"
        readOnly: true
      - mountPath: /etc/kubernetes/ssl
        name: "etc-kube-ssl"
        readOnly: true
  volumes:
    - name: "ssl-certs"
      hostPath:
        path: "/usr/share/ca-certificates"
    - name: "kubeconfig"
      hostPath:
        path: "/etc/kubernetes/worker-kubeconfig.yaml"
    - name: "etc-kube-ssl"
      hostPath:
        path: "/etc/kubernetes/ssl"

Upvotes: 44

Views: 173650

Answers (12)

AZ_

Reputation: 21899

If you are running Docker Desktop on a Mac with an Apple Silicon (Mx) chip:

  1. rm -rf $HOME/.kube || true
  2. Reset the Kubernetes cluster from Docker Desktop (the Reset Kubernetes Cluster button).
  3. Test it with the kubectl version command.

Upvotes: 0

Behdin Talebi

Reputation: 31

I had the same issue and fixed it by following the Kubernetes troubleshooting guide: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/. The fix is to unset the stale KUBECONFIG environment variable and then point it at the admin config generated by kubeadm:
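
# clear the stale kubeconfig reference, then use the admin config generated by kubeadm
unset KUBECONFIG
export KUBECONFIG=/etc/kubernetes/admin.conf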

Upvotes: 0

yasin lachini

Reputation: 6026

This error often occurs when you are using an old or wrong config from a previous Kubernetes installation or setup.

The commands below remove the old or wrong config, copy the new config into the .kube directory, and set the correct permissions. If you think you might still need the old config, back it up first with mv $HOME/.kube $HOME/.kube.bak:

rm -rf $HOME/.kube || true    
mkdir -p $HOME/.kube   
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config   
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Upvotes: 73

piouson

Reputation: 4565

tldr: sudo rm -rd /root/.kube

In my case, I faced this error after running the commands below:

sudo kubeadm init # success
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes # success
sudo kubeadm token list # ERROR, x509: certificate signed by unknown authority
kubeadm token list # success

The Problem

In a previous run, I accidentally created /root/.kube/config with the commands below:

sudo -s
kubeadm init # success
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# this worked until I ran `kubeadm init` again and configs diverged

The Solution

Delete incorrect config

sudo kubeadm token list # ERROR, x509: certificate signed by unknown authority
sudo rm -rd /root/.kube
sudo kubeadm token list # SUCCESS

Upvotes: 1

Sandeep Goutam

Reputation: 154

I followed the steps below and the problem was resolved.

  1. Take a backup of the original file:

     cp /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf_bkp

  2. Create a symlink to it at .kube/config inside the user's home directory:

     ln -s /etc/kubernetes/admin.conf $HOME/.kube/config

Now the config in the home directory is linked to the main admin.conf file, which resolved the problem.

Upvotes: -1

Yousef khan

Reputation: 3204

For anyone like me who is facing the same error only in the VS Code Kubernetes extension:

I had reinstalled Docker/Kubernetes but didn't update the VS Code Kubernetes extension.

You need to make sure you are using the correct kubeconfig, since reinstalling Kubernetes creates a new certificate.

Either use $HOME/.kube/config in the setKubeconfig option, or copy it to the path the VS Code extension is set to read the config from, using the following command:

cp $HOME/.kube/config /{{path-for-kubeconfig}}

Upvotes: 0

I found this error in the coredns pods; pod creation failed with x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-ca"). The issue for me was that I had already installed a k8s cluster on the same node before, and I used the kubeadm reset command to remove it. That command left behind some files in /etc/cni/ that probably caused the issue. I deleted this folder and reinstalled the cluster with kubeadm init.
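
A minimal sketch of that cleanup (assuming the leftover files live under /etc/cni/, as described above):

sudo kubeadm reset       # tear down the previously installed cluster
sudo rm -rf /etc/cni/    # remove the leftover CNI config that reset left behind
sudo kubeadm init        # re-create the cluster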

Upvotes: 0

Gabriel Fernandez

Reputation: 660

The problem persisted for me even after:

mkdir -p $HOME/.kube   
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config   
sudo chown $(id -u):$(id -g) $HOME/.kube/config

In that case, restarting kubelet solved the problem:

systemctl restart kubelet

Upvotes: 1

Umar Hayat

Reputation: 4991

From the official Kubernetes site:

  1. Verify that the $HOME/.kube/config file contains a valid certificate, and regenerate a certificate if necessary (a quick way to inspect the embedded certificate is shown after the steps below).

  2. Unset the KUBECONFIG environment variable using:

    unset KUBECONFIG

    Or set it to the default KUBECONFIG location:

    export KUBECONFIG=/etc/kubernetes/admin.conf

  3. Another workaround is to overwrite the existing kubeconfig for the “admin” user:

    mv  $HOME/.kube $HOME/.kube.bak
    mkdir $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
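
A quick way to inspect the client certificate embedded in $HOME/.kube/config (a sketch, assuming the kubeconfig embeds the certificate inline as client-certificate-data rather than pointing at a file on disk):

# decode the embedded client certificate and print its issuer, subject and validity dates
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 -d | openssl x509 -noout -issuer -subject -dates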
    

Reference: official site link reference

Upvotes: 32

Mike Chen

Reputation: 397

The regular method mentioned above did not work for me. I used the complete set of commands below to get a working certificate setup:

$ sudo kubeadm reset
$ sudo swapoff -a 

$ sudo kubeadm init --pod-network-cidr=10.244.10.0/16 --kubernetes-version "1.18.3"
$ sudo rm -rf $HOME/.kube

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

$ sudo systemctl enable docker.service
$ sudo service kubelet restart

$ kubectl get nodes

Notes:

If the connection to the port is refused, also run the following command.

$ export KUBECONFIG=$HOME/admin.conf

Upvotes: 1

Abhay Dwivedi

Reputation: 2030

To answer your first question, I think you have to do a few things to resolve the problem.

First, run the command given in this link: kubernetes.io/docs/setup/independent/create-cluster-kubeadm/…

Then complete the setup with these commands:

  • mkdir -p $HOME/.kube
  • sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  • sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl needs to know about this admin.conf in order to work properly.
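
As a quick sanity check (not part of the original steps), kubectl should now reach the cluster without TLS errors:

# prints the API server address if the kubeconfig and certificates are picked up correctly
kubectl cluster-info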

Upvotes: 1

JohnBegood

Reputation: 878

Please use this as a reference; it may help you resolve your issue by exporting your certs:

export KOPS_STATE_STORE=s3://"paste your S3 store"
kops export kubecfg "your cluster-name"

Hope that will help.

Upvotes: 4
