Reputation: 361
We have set up Kubernetes 1.10.1 on CoreOS with three nodes. The setup is successful:
NAME STATUS ROLES AGE VERSION
node1.example.com Ready master 19h v1.10.1+coreos.0
node2.example.com Ready node 19h v1.10.1+coreos.0
node3.example.com Ready node 19h v1.10.1+coreos.0
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod-nginx2-689b9cdffb-qrpjn 1/1 Running 0 16h
kube-system calico-kube-controllers-568dfff588-zxqjj 1/1 Running 0 18h
kube-system calico-node-2wwcg 2/2 Running 0 18h
kube-system calico-node-78nzn 2/2 Running 0 18h
kube-system calico-node-gbvkn 2/2 Running 0 18h
kube-system calico-policy-controller-6d568cc5f7-fx6bv 1/1 Running 0 18h
kube-system kube-apiserver-x66dh 1/1 Running 4 18h
kube-system kube-controller-manager-787f887b67-q6gts 1/1 Running 0 18h
kube-system kube-dns-79ccb5d8df-b9skr 3/3 Running 0 18h
kube-system kube-proxy-gb2wj 1/1 Running 0 18h
kube-system kube-proxy-qtxgv 1/1 Running 0 18h
kube-system kube-proxy-v7wnf 1/1 Running 0 18h
kube-system kube-scheduler-68d5b648c-54925 1/1 Running 0 18h
kube-system pod-checkpointer-vpvg5 1/1 Running 0 18h
But when I try to see the logs of any pod, kubectl gives the following error:
kubectl logs -f pod-nginx2-689b9cdffb-qrpjn
error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log pod-nginx2-689b9cdffb-qrpjn))
Trying to get inside the pods (using the exec command of kubectl) also gives the following error:
kubectl exec -ti pod-nginx2-689b9cdffb-qrpjn bash
error: unable to upgrade connection: Unauthorized
Kubelet service file:
[Unit]
Description=Kubelet via Hyperkube ACI
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
--volume var-lib-cni,kind=host,source=/var/lib/cni \
--mount volume=var-lib-cni,target=/var/lib/cni \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log"
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--kubeconfig=/etc/kubernetes/kubeconfig \
--config=/etc/kubernetes/config \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--network-plugin=cni \
--allow-privileged \
--lock-file=/var/run/lock/kubelet.lock \
--exit-on-lock-contention \
--hostname-override=node1.example.com \
--node-labels=node-role.kubernetes.io/master \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
KubeletConfiguration file:
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
staticPodPath: "/etc/kubernetes/manifests"
clusterDomain: "cluster.local"
clusterDNS: [ "10.3.0.10" ]
nodeStatusUpdateFrequency: "5s"
clientCAFile: "/etc/kubernetes/ca.crt"
We have also specified the "--kubelet-client-certificate" and "--kubelet-client-key" flags in the kube-apiserver.yaml file:
- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
So what are we missing here? Thanks in advance :)
Upvotes: 33
Views: 161800
Reputation: 1
Renew the certificate embedded in the kubeconfig file used by the admin and by kubeadm itself:
$ kubeadm certs renew admin.conf
Then you will get a new kubeconfig file that has new client-certificate-data and client-key-data for the k8s admin user.
$ cat /etc/kubernetes/admin.conf
You can try to log in:
$ kubectl get pods --kubeconfig /etc/kubernetes/admin.conf
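If the renewed kubeconfig works, you can optionally make it the default for your user; a typical way to do that (a sketch, assuming the standard kubeadm layout this answer relies on) is:
$ mkdir -p $HOME/.kube
$ sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config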
Upvotes: 0
Reputation: 676
I have just encountered this issue. After renewing the certificates (kubeadm certs renew all) and restarting the control plane (including the kubelet on all master nodes), I had to restart the kubelet on all worker nodes as well.
$ sudo systemctl restart kubelet
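If restarting the kubelet alone does not help, it may also be worth checking whether the kubelet's own client certificate has expired (assuming a kubeadm-provisioned node with certificate rotation enabled):
$ sudo openssl x509 -noout -enddate -in /var/lib/kubelet/pki/kubelet-client-current.pem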
Upvotes: 0
Reputation: 1698
In my case, I noticed this issue on a running cluster that had not been touched for a long time. This answer is mainly for people arriving from Google, since this page ranks at the top for the error described in the question.
The issue was expired certificates.
You can check this on the Kubernetes master server:
# find /etc/kubernetes/pki/ -type f -name "*.crt" -print | egrep -v 'ca.crt$' | xargs -L 1 -t -i bash -c 'openssl x509 -noout -text -in {}|grep After'
bash -c openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt|grep After
Not After : Jan 19 14:54:15 2022 GMT
bash -c openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver-kubelet-client.crt|grep After
Not After : Nov 13 01:46:12 2021 GMT
bash -c openssl x509 -noout -text -in /etc/kubernetes/pki/front-proxy-client.crt|grep After
Not After : Nov 13 01:46:12 2021 GMT
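If the certificates have indeed expired, a common fix on a kubeadm-managed cluster (a sketch; your renewal procedure may differ) is to renew them and then restart the control-plane components and the kubelet, as the other answers here mention:
# kubeadm certs check-expiration
# kubeadm certs renew all
# systemctl restart kubelet
On older kubeadm versions these subcommands live under kubeadm alpha certs instead.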
Upvotes: 5
Reputation: 3593
In my case the problem was that the context had somehow been changed. I checked it with
kubectl config current-context
and then changed it back to the correct one with
kubectl config use-context docker-desktop
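If you are not sure which context is the right one, you can first list everything kubectl knows about:
kubectl config get-contexts
kubectl config get-clusters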
Upvotes: 25
Reputation: 22058
This is a quite common and general error related to authentication problems against the API server.
I believe many people search for this title, so I'll provide a few directions with examples for different types of cases.
1) (General)
Common to all types of deployments: check whether the credentials have expired.
2) (Pods and service accounts)
The authentication issue is related to a pod that is using a service account with problems, such as an invalid token.
3) (IaC or deployment tools)
You are running with an IaC tool like Terraform and failed to pass the certificate correctly, as in this case.
4) (Cloud or other SaaS providers)
A few cases I encountered with AWS EKS:
4.A) If you're not the cluster creator, you might not have permission to access the cluster.
When an EKS cluster is created, the user (or role) that creates the cluster is automatically granted the system:masters permissions in the cluster's RBAC configuration.
Other users or roles that need the ability to interact with your cluster need to be added explicitly. Read more here.
4.B) If you're working with multiple clusters/environments/accounts via the CLI, the current profile in use may need to be re-authenticated, or there may be a mismatch between the cluster you need to access and the values of shell variables like AWS_DEFAULT_PROFILE or AWS_DEFAULT_REGION.
4.C) New credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) were created and exported, but the terminal still contains old values from a previous session (AWS_SESSION_TOKEN) that need to be replaced or unset. A short sketch of how to check 4.B and 4.C follows after this list.
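For 4.B and 4.C, a quick sketch of the checks I usually run (the cluster, region and profile names below are placeholders):
# Which identity is the CLI actually using right now?
aws sts get-caller-identity
# Regenerate the kubeconfig entry for the intended cluster / profile / region
aws eks update-kubeconfig --name my-cluster --region us-east-1 --profile my-profile
For 4.A, additional users or roles are granted access through the aws-auth ConfigMap, e.g. kubectl edit configmap aws-auth -n kube-system.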
Upvotes: 15
Reputation: 67
For me the issue was related to a misconfiguration in the ~/.kube/config file. It was resolved after restoring the configuration using kubectl config view --raw > ~/.kube/config.
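Note that if ~/.kube/config is the only file kubectl is reading, the shell redirection truncates it before kubectl ever sees it, so this approach mainly helps when KUBECONFIG points at one or more other files to be merged. A safer variant (a sketch, assuming a standard shell) is to write to a temporary file first:
kubectl config view --raw > /tmp/kubeconfig-restored
mv /tmp/kubeconfig-restored ~/.kube/config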
Upvotes: 0
Reputation: 166
In my case, I experienced multiple errors while trying to run different kubectl commands, like "unauthorized", "server has asked client to provide credentials", etc. After spending a few hours, I deduced that the sync to my cluster in the cloud had somehow got messed up. So I ran the following commands to refresh the configuration, and it started working again:
Unset users:
kubectl config unset users.<full-user-name-as-found-in: kubectl config view>
Remove cluster:
kubectl config delete-cluster <full-cluster-name-as-found-in: kubectl config view>
Remove context:
kubectl config delete-context <full-context-name-as-found-in: kubectl config view>
Default context:
kubectl config use-context <full-context-name-as-found-in: kubectl config view>
Get fresh cluster config from cloud:
ibmcloud cs cluster config --cluster <cluster-name>
Note: I am using IBM Cloud for my cluster, so the last command may differ in your case.
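For reference, the equivalent "fetch fresh cluster credentials" step on a few other providers (names below are placeholders) would look something like:
aws eks update-kubeconfig --name <cluster-name> --region <region>
gcloud container clusters get-credentials <cluster-name> --zone <zone>
az aks get-credentials --resource-group <resource-group> --name <cluster-name>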
Upvotes: -2
Reputation: 816
In general, many different .kube/config file errors will trigger this error message. In my case it was that I simply specified the wrong cluster name in my config file (and spent MANY hours trying to debug it).
When I specified the wrong cluster name, I received two prompts for MFA token codes, followed by the error: You must be logged in to the server (the server has asked for the client to provide credentials).
Example:
# kubectl create -f scripts/aws-auth-cm.yaml
Assume Role MFA token code: 123456
Assume Role MFA token code: 123456
could not get token: AccessDenied: MultiFactorAuthentication failed with invalid MFA one time pass code.
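To double-check that the cluster name in your config file matches a cluster that actually exists (EKS shown as an example; the region is a placeholder):
kubectl config view --minify -o jsonpath='{.clusters[0].name}'
aws eks list-clusters --region us-east-1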
Upvotes: 0
Reputation: 2710
Looks like you misconfigured the kubelet:
You missed the --client-ca-file flag in your kubelet service file.
That’s why you can get some general information from the master, but can’t get access to the nodes.
This flag tells the kubelet which client CA certificate to trust; without it, the API server’s requests to the kubelet (logs, exec, and so on) are rejected as unauthorized, so you can’t get access to the nodes.
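A minimal sketch of the fix, reusing the flags from the question with only the client CA flag added (note that when a KubeletConfiguration file is used, the equivalent setting is authentication.x509.clientCAFile rather than a top-level clientCAFile):
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--kubeconfig=/etc/kubernetes/kubeconfig \
--config=/etc/kubernetes/config \
--client-ca-file=/etc/kubernetes/ca.crt \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--network-plugin=cni \
--allow-privileged \
--lock-file=/var/run/lock/kubelet.lock \
--exit-on-lock-contention \
--hostname-override=node1.example.com \
--node-labels=node-role.kubernetes.io/master \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule
After editing the unit file, reload and restart the kubelet: sudo systemctl daemon-reload && sudo systemctl restart kubelet.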
Upvotes: 2