Ivan Balashov

Reputation: 1953

Collecting app-level metrics from Kubernetes containers

According to the Kubernetes Custom Metrics Proposal, containers can expose their app-level metrics in Prometheus format to be collected by Heapster.

Could anyone elaborate: if metrics are pulled by Heapster, does that mean metrics for the last interval are lost once the container terminates? Can an app push metrics to Heapster instead?

Or, is there a recommended approach to collect metrics from moderately short-lived containers running in Kubernetes?

Upvotes: 3

Views: 2705

Answers (3)

brian-brazil

Reputation: 34122

Prometheus developer here. If you want to monitor the metrics of applications running on Kubernetes, the approach is to have Prometheus scrape the application directly. Prometheus can auto-discover Kubernetes apps, see http://prometheus.io/docs/operating/configuration/#<kubernetes_sd_config>
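As a minimal sketch of the application side, assuming the Python prometheus_client library and an arbitrary port 8000 (both are illustrative choices, not mandated by Prometheus):

```python
from prometheus_client import start_http_server, Counter
import time

# Example metric; the name is a placeholder.
REQUESTS = Counter('myapp_requests_total', 'Total requests handled by the app')

if __name__ == '__main__':
    # Serve /metrics on :8000 so Prometheus can scrape this pod directly.
    start_http_server(8000)
    while True:
        REQUESTS.inc()   # stand-in for real work being counted
        time.sleep(1)
```

With the pod annotated or matched by a kubernetes_sd_config job, Prometheus discovers it and scrapes that endpoint on its own schedule.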

There is no point in involving Heapster if you're using Prometheus, as Prometheus can do everything it does more directly.

Upvotes: 0

Ryan Cox

Reputation: 5073

You may want to look at handling this by sending your metrics directly from your job to a Prometheus pushgateway. This is the precise use case it was created for:

The Prometheus Pushgateway exists to allow ephemeral and batch jobs to expose their metrics to Prometheus. Since these kinds of jobs may not exist long enough to be scraped, they can instead push their metrics to a Pushgateway. The Pushgateway then exposes these metrics to Prometheus.
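A short-lived job can push its final metrics roughly like this; this is a sketch using the prometheus_client Python library, and the gateway address, job name, and metric name are placeholders:

```python
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
g = Gauge('job_last_success_unixtime',
          'Last time the batch job successfully finished',
          registry=registry)
g.set_to_current_time()

# Push once before the container exits; 'localhost:9091' stands in for
# wherever your Pushgateway service is reachable in the cluster.
push_to_gateway('localhost:9091', job='my-batch-job', registry=registry)
```

Prometheus then scrapes the Pushgateway, so the metrics survive the container's termination.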

Upvotes: 0

Alex Robinson

Reputation: 13377

Not to speak for the original author's intent, but I believe that proposal is primarily focused on custom metrics you want to use for things like scheduling and autoscaling within the cluster, not for general-purpose monitoring (for which, as you mention, pushing metrics is sometimes critical).

There isn't a single recommended pattern for what to do with custom metrics in general. If your environment has a preferred monitoring stack or vendor, a common approach is to run a second container in each pod (a "sidecar" container) to push relevant metrics about the main container to your monitoring backend.
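As a very rough sketch of such a sidecar, assuming the main container exposes Prometheus-format metrics on localhost:8000/metrics and the backend accepts an HTTP POST of that text (both endpoints are hypothetical, not a real API):

```python
import time
import urllib.request

METRICS_URL = "http://localhost:8000/metrics"          # main container, shared pod network
BACKEND_URL = "http://metrics-backend.example/ingest"  # hypothetical monitoring backend

def forward_once():
    # Read the main container's metrics over the pod's loopback interface.
    with urllib.request.urlopen(METRICS_URL, timeout=5) as resp:
        payload = resp.read()
    # Forward the raw exposition text to the (hypothetical) backend.
    req = urllib.request.Request(BACKEND_URL, data=payload, method="POST")
    urllib.request.urlopen(req, timeout=5)

if __name__ == "__main__":
    while True:
        forward_once()
        time.sleep(15)  # push interval; tune for how short-lived your containers are
```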

Upvotes: 1
