Torsten Bronger

Reputation: 11116

How to reduce CPU limits of kubernetes system resources?

I'd like to keep the number of cores in my GKE cluster below 3. This becomes much more feasible if the CPU limits of the K8s replication controllers and pods are reduced from 100m to at most 50m. Otherwise, the K8s pods alone take 70% of one core.

I decided against increasing the CPU power of a node. This would be conceptually wrong in my opinion because the CPU limit is defined to be measured in cores. Instead, I did the following:

This is a lot of work and probably fragile. Any further changes in upcoming versions of K8s, or changes in the GKE configuration, may break it.

So, is there a better way?

Upvotes: 28

Views: 6608

Answers (4)

Matheus Portillo

Reputation: 307

As @Tim Hockin stated, the default configurations of the add-ons are appropriate for typical clusters, but they can be fine-tuned by changing the resource limit specification.

Before working on add-on resizing, remember that you can also disable any add-ons that are unnecessary for your use case. This can vary a little depending on the add-on, its version, the Kubernetes version, and the provider. Google has a page covering some options, and the same concepts can be applied to other providers too.

As for the solution to the issue linked in @Tim Hockin's answer, the first accepted way to do it is by using addon-resizer. It basically figures out the best limits and requests, patches the Deployment/Pod/DaemonSet, and recreates the associated pods to match the new limits, but with less effort than doing all of it manually.
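As a rough illustration, addon-resizer normally runs as a "nanny" sidecar container next to the add-on it resizes, scaling its requests with cluster size. The sketch below is a container-spec fragment only; the image tag and the flag values are illustrative, not taken from any particular cluster:

```yaml
# Sketch: addon-resizer as a "nanny" sidecar resizing metrics-server.
# Image tag and flag values are illustrative assumptions.
containers:
  - name: addon-resizer
    image: registry.k8s.io/autoscaling/addon-resizer:1.8.14
    command:
      - /pod_nanny
      - --container=metrics-server   # container whose resources get patched
      - --deployment=metrics-server  # deployment the nanny patches
      - --cpu=40m                    # base CPU request
      - --extra-cpu=0.5m             # additional CPU per node in the cluster
      - --memory=40Mi                # base memory request
      - --extra-memory=4Mi           # additional memory per node
```

The per-node `--extra-*` flags are what make the add-on's footprint track cluster size, which is exactly the behavior a fixed LimitRange default cannot give you.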

However, a more robust way to achieve this is to use the Vertical Pod Autoscaler, as stated in @Tim Smart's answer. VPA accomplishes what addon-resizer does, but it has several benefits:

  • VPA is an add-on driven by a custom resource definition, so your configuration is much more compact than an addon-resizer deployment.
  • Being based on a custom resource definition also makes it much easier to keep the implementation up to date.
  • Some providers (such as Google) run the VPA resources on control-plane processes instead of as deployments on your worker nodes. Because of that, even though addon-resizer is simpler, VPA uses none of your resources while addon-resizer would.

An updated template would be:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: <addon-name>-vpa
  namespace: kube-system
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind:       <addon-kind (Deployment/DaemonSet/Pod)>
    name:       <addon-name>
  updatePolicy:
    updateMode: "Auto"

It is important to check the add-ons being used in your current cluster, as they can vary a lot between providers (AWS, Google, etc.) and between Kubernetes implementation versions.

Make sure you have the VPA add-on installed in your cluster (most managed Kubernetes services offer it as an easy option to enable).

The update policy can be Initial (only applies new limits when new pods are created), Recreate (forces pods that are out of spec to die and applies the limits to the new pods), Off (creates recommendations but doesn't apply them), or Auto (currently matches Recreate, but this can change in the future).
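If you want to see what VPA would do before letting it restart anything, a recommendation-only variant of the template above (target name illustrative) uses the "Off" mode; you can then inspect the suggestions with `kubectl -n kube-system describe vpa kube-dns-vpa`:

```yaml
# Sketch: a recommendation-only VPA. With updateMode "Off" no pods are
# restarted; VPA only publishes suggested requests in its status.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: kube-dns-vpa
  namespace: kube-system
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: kube-dns
  updatePolicy:
    updateMode: "Off"
```

Once the recommendations look sane for your cluster, switching updateMode to "Auto" or "Initial" makes them take effect.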

The only differences from @Tim Smart's example are that the current API version is autoscaling.k8s.io/v1, the current API version of the targets is apps/v1, and that newer versions of some providers use FluentBit in place of Fluentd. His answer might be better suited for earlier Kubernetes versions.

For example, if you are using Google Kubernetes Engine, some of the add-ons with the "heaviest" requirements are currently:

  • fluentbit-gke (DaemonSet)
  • gke-metadata-server (DaemonSet)
  • kube-proxy (DaemonSet)
  • kube-dns (Deployment)
  • stackdriver-metadata-agent-cluster-level (Deployment)

By applying VPAs to them, my add-on resource requests dropped from 1.6 to 0.4.

Upvotes: 5

Tim Smart

Reputation: 1045

I have found that one of the best ways to reduce the system resource requests on a GKE cluster is to use a Vertical Pod Autoscaler (VPA).

Here are the VPA definitions I have used:

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: kube-dns-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: Deployment
    name: kube-dns
  updatePolicy:
    updateMode: "Auto"

---

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: heapster-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: Deployment
    name: heapster-v1.6.0-beta.1
  updatePolicy:
    updateMode: "Initial"

---

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: metadata-agent-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: DaemonSet
    name: metadata-agent
  updatePolicy:
    updateMode: "Initial"

---

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: metrics-server-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: Deployment
    name: metrics-server-v0.3.1
  updatePolicy:
    updateMode: "Initial"

---

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: fluentd-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: DaemonSet
    name: fluentd-gcp-v3.1.1
  updatePolicy:
    updateMode: "Initial"

---

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: kube-proxy-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: DaemonSet
    name: kube-proxy
  updatePolicy:
    updateMode: "Initial"

Here is a screenshot of what it does to a kube-dns deployment.

Upvotes: 13

David Dehghan

Reputation: 24775

By the way, in case you want to try this on Google Cloud GCE: if you try to change the CPU limit of core services like kube-dns, you will get an error like this.

spec: Forbidden: pod updates may not change fields other than spec.containers[*].image, spec.initContainers[*].image, spec.activeDeadlineSeconds or spec.tolerations (only additions to existing tolerations)

Tried on Kubernetes 1.8.7 and 1.9.4.

So at this time the smallest node you can deploy is n1-standard-1. Also, about 8% of your CPU is consumed almost constantly by Kubernetes itself as soon as you have several pods and Helm charts, even if you are not running any major load. I think there is a lot of polling going on, and to make sure the cluster stays responsive, some stats keep getting refreshed.

Upvotes: 1

Tim Hockin

Reputation: 3662

Changing the default Namespace's LimitRange spec.limits.defaultRequest.cpu should be a legitimate solution for changing the default for new Pods. Note that LimitRange objects are namespaced, so if you use extra Namespaces you probably want to think about what a sane default is for them.

As you point out, this will not affect existing objects or objects in the kube-system Namespace.
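As a sketch, a LimitRange lowering the default CPU request for new Pods in the default Namespace to the 50m from the question might look like this (the object name is illustrative):

```yaml
# Sketch: override the default container CPU request (normally 100m)
# for new Pods created in the default Namespace.
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-defaults
  namespace: default
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: 50m        # applied when a container specifies no CPU request
```

Remember that this only applies to containers that do not set their own request, and only in the Namespace where the LimitRange lives.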

The objects in the kube-system Namespace were mostly sized empirically - based on observed values. Changing those might have detrimental effects, but maybe not if your cluster is very small.

We have an open issue (https://github.com/kubernetes/kubernetes/issues/13048) to adjust the kube-system requests based on total cluster size, but that is not implemented yet. We have another open issue (https://github.com/kubernetes/kubernetes/issues/13695) to perhaps use a lower QoS for some kube-system resources, but again, that is not implemented yet.

Of these, I think that #13048 is the right way to implement what you're asking for. For now, the answer to "is there a better way" is sadly "no". We chose defaults for medium-sized clusters; for very small clusters you probably need to do what you are doing.

Upvotes: 12
