user10830520

Reputation: 21

Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get,

[xueke@master-01 admin]$ kubectl logs nginx-deployment-76bf4969df-999x8
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-deployment-76bf4969df-999x8)

[xueke@master-01 admin]$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.0.101:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

I specified the admin user here. How do I need to modify it?

Upvotes: 2

Views: 11368

Answers (3)

Prafull Ladha

Reputation: 13441

The above error means your apiserver doesn't have the credentials (a kubelet client certificate and key) to authenticate to the kubelet for log/exec requests, hence the Forbidden error message.

You need to provide --kubelet-client-certificate=<path_to_cert> and --kubelet-client-key=<path_to_key> to your apiserver; this way the apiserver authenticates to the kubelet with that certificate and key pair.

For more information, have a look at:

https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/
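As a sketch of where those flags go on a kubeadm-managed cluster (assuming the default static pod manifest path and the certificate filenames kubeadm generates; adjust paths for your setup), the kube-apiserver manifest would include:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm default location)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # ... other flags ...
    # Client cert/key the apiserver presents when calling the kubelet API
    # (needed for kubectl logs / exec / port-forward):
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
```

The kubelet must be serving its API (port 10250 by default) and the certificate must be signed by a CA the kubelet trusts; the apiserver restarts automatically when the static pod manifest changes.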

Upvotes: 3

Johann8

Reputation: 730

In our case, the error stemmed from Azure services being degraded by a DNS-resolution bug introduced in Ubuntu 18.04. See Azure status and the technical thread. I ran this command to set a fallback DNS address on the nodes:

az vmss list-instances -g <resourcegroup> -n vmss --query "[].id" --output tsv \
  | az vmss run-command invoke --scripts "echo FallbackDNS=168.63.129.16 >> /etc/systemd/resolved.conf; systemctl restart systemd-resolved.service" --command-id RunShellScript --ids @-

Upvotes: 0

suren

Reputation: 8786

That's an RBAC error: the user system:anonymous has no permission to read pod logs. If you have a user with cluster-admin permissions, you can make the error go away with

kubectl create clusterrolebinding the-boss --user system:anonymous --clusterrole cluster-admin

Note: giving an anonymous user the cluster-admin role is not a good idea, though it will fix the issue.
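A less dangerous variant of the same idea (a sketch, not from the original answer; the role and binding names here are made up) grants system:anonymous only read access to pod logs instead of full cluster-admin:

```yaml
# ClusterRole allowing only "kubectl logs"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-log-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
---
# Bind it to the anonymous user
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: anonymous-log-reader
subjects:
- kind: User
  name: system:anonymous
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-log-reader
  apiGroup: rbac.authorization.k8s.io
```

Apply with `kubectl apply -f <file>.yaml`. Even so, allowing anonymous access at all is usually a sign the kubeconfig or apiserver credentials are misconfigured, as the accepted answer describes.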

Upvotes: -2
