Tim

Reputation: 2231

How to make HPA scale a deployment based on metrics produced by another deployment

What I am trying to achieve is creating a Horizontal Pod Autoscaler able to scale worker pods according to a custom metric produced by a controller pod.

I already have Prometheus scraping, the Prometheus Adapter, and the Custom Metric Server fully operational, and scaling the worker deployment with a custom metric my_controller_metric produced by the worker pods already works.

Now my worker pods don't produce this metric anymore, but the controller does. It seems that the autoscaling/v1 API does not support this feature. I am able to specify the HPA with the autoscaling/v2beta1 API if necessary, though.

Here is my spec for this HPA:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: my-worker-hpa
  namespace: work
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: my-worker-deployment
  metrics:
  - type: Object
    object:
      target:
        kind: Deployment
        name: my-controller-deployment
      metricName: my_controller_metric
      targetValue: 1

When the configuration is applied with kubectl apply -f my-worker-hpa.yml, I get the message:

horizontalpodautoscaler "my-worker-hpa" configured

Though this message seems to be OK, the HPA does not work. Is this spec malformed?

As I said, the metric is available from the Custom Metric Server via kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . | grep my_controller_metric.

This is the error message from the HPA:

Type           Status  Reason                 Message
----           ------  ------                 -------
AbleToScale    True    SucceededGetScale      the HPA controller was able to get the target's current scale
ScalingActive  False   FailedGetObjectMetric  the HPA was unable to compute the replica count: unable to get metric my_controller_metric: Deployment on work my-controller-deployment/unable to fetch metrics from custom metrics API: the server could not find the metric my_controller_metric for deployments

Thanks!

Upvotes: 5

Views: 1512

Answers (2)

Fairlyn

Reputation: 1

I was banging my head on this problem too, and the solution I found was to scale based on namespace metrics exposed via the Prometheus Adapter, rather than pod metrics. I use Helm charts, but the idea remains the same:

Prometheus adapter:

prometheus-adapter:
  rules:
    default: false
    custom:
      - seriesQuery: 'foobar'
        resources:
          overrides:
            namespace:
              resource: namespace
        name:
          matches: "^(.*)"
          as: "metricName"
        metricsQuery: barfoo

HPA:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: name-hpa
  labels:
      {{- include "raw-ds.labels" $global | nindent 4 }}
    app: {{ $deployName }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ $deployName }}
  minReplicas: {{ $.Values.autoscaling.minReplicas }}
  maxReplicas: {{ $.Values.autoscaling.maxReplicas }}
  metrics:
    - type: Object
      object:
        describedObject:
          kind: Namespace
          name: default
          apiVersion: v1
        metric:
          name: metricName
        target:
          type: Value
          value: 2k
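
You can verify that the adapter actually exposes the namespace metric before pointing the HPA at it, reusing the raw-API query from the question (a hypothetical check; metricName is just the placeholder from the rules above):

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/metrics/metricName" | jq .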

Upvotes: 0

PjoterS

Reputation: 14112

In your case the problem is the HPA configuration: spec.metrics.object.target should also specify the API version. Putting apiVersion: extensions/v1beta1 under spec.metrics.object.target should fix it.
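
Applied to the spec from the question, the metrics section would then look like this (a minimal sketch; only the apiVersion line under target is added):

  metrics:
  - type: Object
    object:
      target:
        apiVersion: extensions/v1beta1
        kind: Deployment
        name: my-controller-deployment
      metricName: my_controller_metric
      targetValue: 1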

In addition, there is an open issue about better config validation in HPA: https://github.com/kubernetes/kubernetes/issues/60511

Upvotes: 0
