rj93

Reputation: 573

Google Kubernetes Cluster not autoscaling down

I have a GKE cluster with autoscaling enabled, and a single node pool. This node pool has a minimum of 1 node, and maximum of 5. When I have been testing the autoscaling of this cluster it has correctly scaled up (added a new node) when I added more replicas to my deployment. When I removed my deployment I would have expected it to scale down, but looking at the logs it is failing because it cannot evict the kube-dns deployment from the node:

reason: {
 messageId: "no.scale.down.node.pod.kube.system.unmovable"        
 parameters: [
  0: "kube-dns-7c976ddbdb-brpfq"         
 ]
}

kube-dns isn't run as a DaemonSet, but I have no control over that since this is a managed cluster.

I am using Kubernetes 1.16.13-gke.1.

How can I make the cluster node pool scale down?

Upvotes: 4

Views: 5114

Answers (2)

Félix Cantournet

Reputation: 1991

The autoscaler will not evict pods in the kube-system namespace unless they are managed by a DaemonSet OR they are covered by a PodDisruptionBudget.

For kube-dns, as well as kube-dns-autoscaler and a few other GKE-managed deployments in kube-system, you need to add a PodDisruptionBudget.

e.g.:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  annotations:
  labels:
    k8s-app: kube-dns
  name: kube-dns-bbc
  namespace: kube-system
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns

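Assuming the manifest above is saved as `kube-dns-pdb.yaml` (a filename chosen here for illustration), it can be applied and verified like this:

```shell
# Apply the PodDisruptionBudget in the kube-system namespace
kubectl apply -f kube-dns-pdb.yaml

# Confirm the PDB exists and shows ALLOWED DISRUPTIONS >= 1,
# which lets the autoscaler evict the kube-dns pod during scale-down
kubectl get pdb kube-dns-bbc -n kube-system
```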
Upvotes: 10

rj93

Reputation: 573

I found this GitHub issue, which explains that you need to add a taint to the node pool. After I did this, the node pool was autoscaled down to zero.

Documentation can be found here.
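As a sketch of the approach above (the pool name, cluster name, and taint key/value are placeholders, not from the original post), a taint can be set when creating an autoscaled node pool; note that on older GKE versions taints could only be set at node-pool creation, not added to an existing pool:

```shell
# Create a node pool whose nodes carry a taint, so only pods with a
# matching toleration schedule there; with no such pods, the autoscaler
# can drain the pool to its minimum of zero nodes
gcloud container node-pools create tainted-pool \
  --cluster=my-cluster \
  --node-taints=dedicated=batch:NoSchedule \
  --enable-autoscaling --min-nodes=0 --max-nodes=5
```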

Upvotes: 1
