Reputation: 102
I have a Deployment in GKE associated with a Horizontal Pod Autoscaler based on an external metric (a Pub/Sub subscription).
For some reason, the autoscaler is creating a ripple (or thrashing) effect on my pods, repeatedly scaling them up and down to the same values each minute (as seen in the graph below).
I found out that there is a flag for the kube-controller-manager component that introduces a cooldown period between downscale events (--horizontal-pod-autoscaler-downscale-stabilization).
However, I can't access the configuration of the kube-controller-manager in GKE. Is there any workaround for this? And if it's impossible to configure in GKE, is there another way to mitigate this effect?
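For context, my HPA looks roughly like this (the Deployment name, subscription ID, and target value are placeholders, not my real values):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-worker-hpa            # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-worker              # placeholder Deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        # Stackdriver external metric for undelivered Pub/Sub messages
        name: pubsub.googleapis.com|subscription|num_undelivered_messages
        selector:
          matchLabels:
            resource.labels.subscription_id: my-subscription  # placeholder
      target:
        type: AverageValue
        averageValue: "100"      # placeholder threshold
```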
Upvotes: 2
Views: 557
Reputation: 4899
GKE clusters are fully managed by Google, which means the control plane (the master(s)) is hosted in a Google tenant project and fully managed by the platform. There is no way for you to make any changes to the master or to any of the control plane components.
Therefore, there is no way for you to add the --horizontal-pod-autoscaler-downscale-stabilization flag on GKE.
However, the behavior you are trying to address is either an issue with how your HPA is configured (the metric and/or its threshold) or possibly an issue with how the cluster is ingesting and consuming these metrics, which leads to the constant scaling up and down. I strongly recommend reviewing the external metric you are using to ensure it is a reliable source to base your pod scaling on.
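As a possible mitigation: if your cluster is on Kubernetes 1.18 or newer, the autoscaling/v2beta2 API lets you set a per-HPA equivalent of that flag via `spec.behavior`, with no access to the controller manager needed. A sketch (the window and policy values below are examples, not recommendations):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-worker-hpa                    # placeholder name
spec:
  # ...scaleTargetRef, minReplicas, maxReplicas, metrics as before...
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300    # example: consider the last 5 min of
                                         # recommendations before scaling down
      policies:
      - type: Pods
        value: 1                         # example: remove at most 1 pod
        periodSeconds: 60                # per 60-second period
```

This dampens downscaling for a single HPA object, which is usually what you want for a bursty queue metric like Pub/Sub backlog.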
Upvotes: 3