DhimanHarry

Reputation: 71

kubectl get all command returns "Throttling request"

Running kubectl get all returns the "Throttling request" messages shown below.

How can I debug and fix this issue?

I0223 10:28:04.717522   44883 request.go:655] Throttling request took 1.1688991s, request: GET:https://192.168.64.2:8443/apis/apps/v1?timeout=32s
I0223 10:28:14.913541   44883 request.go:655] Throttling request took 5.79656704s, request: GET:https://192.168.64.2:8443/apis/authorization.k8s.io/v1?timeout=32s
I0223 10:28:24.914386   44883 request.go:655] Throttling request took 7.394979677s, request: GET:https://192.168.64.2:8443/apis/cert-manager.io/v1alpha2?timeout=32s
I0223 10:28:35.513643   44883 request.go:655] Throttling request took 1.196992376s, request: GET:https://192.168.64.2:8443/api/v1?timeout=32s
I0223 10:28:45.516586   44883 request.go:655] Throttling request took 2.79962307s, request: GET:https://192.168.64.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
I0223 10:28:55.716699   44883 request.go:655] Throttling request took 4.600430975s, request: GET:https://192.168.64.2:8443/apis/node.k8s.io/v1beta1?timeout=32s
I0223 10:29:05.717707   44883 request.go:655] Throttling request took 6.196503125s, request: GET:https://192.168.64.2:8443/apis/storage.k8s.io/v1?timeout=32s
I0223 10:29:15.914744   44883 request.go:655] Throttling request took 7.99827047s, request: GET:https://192.168.64.2:8443/apis/acme.cert-manager.io/v1alpha2?timeout=32s

Upvotes: 6

Views: 12632

Answers (4)

chestack

Reputation: 116

The reason is the Kubernetes discovery cache.

Background, upstream status and further details are in The Kubernetes Discovery Cache: Blessing and Curse
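
A rough way to see how large the discovery surface on your cluster has become (assuming the default cache location; note that kubectl api-resources itself walks every API group, so it will show the same throttling if this is the problem):

kubectl api-resources | wc -l
du -sh ~/.kube/cache/discovery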

Upvotes: 0

Mohan

Reputation: 39

Just remove the kubectl cache directory:

sudo rm -rf ~/.kube/cache/

It worked for me; don't spend too much time on this.
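
If you would rather not delete everything under ~/.kube/cache, clearing just the discovery cache may be enough for this symptom (assuming the default cache location; it differs if you pass --cache-dir). kubectl rebuilds it on the next call:

rm -rf ~/.kube/cache/discovery/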

Upvotes: 3

Drew_Viles

Reputation: 317

To diagnose kubectl commands, you can select a verbosity level when running them. If you run kubectl with -v=9 you'll get a load of debug output.
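
For this particular problem, filtering the verbose output makes the relevant lines easier to spot (a rough example, adjust it to whatever command you were running; the debug output goes to stderr, hence the redirect):

kubectl get all -v=9 2>&1 | grep -iE 'throttl|cache'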

Looking through that output, you may find that the permissions on the cache directory inside .kube are invalid:

I0511 09:28:13.431116  260204 cached_discovery.go:87] failed to write cache to /home/$USER/.kube/cache/discovery/CLUSTER_NAME/security.istio.io/v1beta1/serverresources.json due to mkdir /home/$USER/.kube/cache/discovery: permission denied

To resolve this I simply set the permissions to allow the cache data to be written.

chmod 755 -R ~/.kube/cache

This resolved the problem for me - hope it helps others.
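
If the cache ended up owned by root instead (which can happen after running kubectl under sudo), fixing ownership rather than permissions may be what's needed; something along these lines, assuming your own user should own the directory:

sudo chown -R "$(id -un):$(id -gn)" ~/.kube/cache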

Upvotes: 22

Yigit Polat

Reputation: 31

According to Red Hat "Due to increasing number of Custom Resource Definitions (CRDs) installed in the RHOCP - Cluster the requests reaching for API discovery were limited by the client code."

I had a lot of CRDs on my OpenShift cluster and observed this issue. Is that the case on your Kubernetes cluster as well?

Updating the kubectl version worked for me.
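
To check whether this applies to you, count the installed CRDs and confirm which client version you are on (newer kubectl releases raised the client-side rate limits used for discovery, which is presumably why upgrading helps):

kubectl get crds --no-headers | wc -l
kubectl version --client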

Upvotes: 2
