ctavan

Reputation: 472

Monitoring and alerting on pod status or restart count with Google Container Engine (GKE) and Stackdriver

Is there a way to monitor the pod status and restart count of pods running in a GKE cluster with Stackdriver?

While I can see CPU, memory and disk usage metrics for all pods in Stackdriver, there seems to be no way of getting metrics about crashing pods or pods in a replica set being restarted due to crashes.

I'm using a Kubernetes replica set to manage the pods, so when a pod crashes it is respawned under a new name. As far as I can tell, the metrics in Stackdriver are reported per pod name (which is unique only for the lifetime of the pod), which doesn't seem very sensible.

Alerting on pod failures seems like such a natural thing that it's hard to believe it isn't supported at the moment. As they stand, the monitoring and alerting capabilities that Stackdriver provides for Google Container Engine seem rather useless, since they are all bound to pods whose lifetime can be very short.

So if this doesn't work out of the box, are there known workarounds or best practices for monitoring continuously crashing pods?

Upvotes: 24

Views: 16133

Answers (5)

Natan Yellin

Reputation: 6387

Others have commented on how to do this with metrics, which is the right solution if you have a very large number of crashing pods.

An alternative approach is to treat crashing pods as discrete events or even log lines. You can do this with Robusta (disclaimer: I wrote it) with YAML like this:

triggers:
  - on_pod_update: {}
actions:
  - restart_loop_reporter:
      restart_reason: CrashLoopBackOff
  - image_pull_backoff_reporter:
      rate_limit: 3600
sinks:
  - slack

Here we're triggering an action named restart_loop_reporter whenever a pod updates. The data stream comes from the Kubernetes API server.

The restart_loop_reporter is an action that filters out non-crashing pods. Above, it's configured to report only on CrashLoopBackOffs, but you could remove that restriction to report all crashes.

A benefit of doing it this way is that you can gather extra data about the crash automatically. For example, the above will fetch the pod's logs and forward them along with the crash report.

I'm sending the result here to Slack, but you could just as well send it to a structured output like Kafka (already built in) or Stackdriver (not yet supported, but I can fix that if you like).

Upvotes: 0

dan carter

Reputation: 4361

There is a built-in metric now, so it's easy to build dashboards and/or alerts on it without setting up custom metrics:

Metric: kubernetes.io/container/restart_count
Resource type: k8s_container
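
For example, an alerting policy on this metric could look roughly like the sketch below (hedged: the display names, threshold and alignment period are placeholder choices of mine, not values from this answer); something like gcloud alpha monitoring policies create --policy-from-file=restart-policy.yaml should create it:

    # Hedged sketch of a Cloud Monitoring AlertPolicy. ALIGN_DELTA turns the
    # cumulative restart_count into "restarts per 5-minute window", and the
    # condition fires as soon as that delta exceeds 0.
    displayName: Container restart alert   # placeholder name
    combiner: OR
    conditions:
      - displayName: restart_count increased
        conditionThreshold:
          filter: >-
            metric.type="kubernetes.io/container/restart_count"
            AND resource.type="k8s_container"
          comparison: COMPARISON_GT
          thresholdValue: 0
          duration: 0s
          aggregations:
            - alignmentPeriod: 300s
              perSeriesAligner: ALIGN_DELTA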

Upvotes: 7

Jonathan Lin

Reputation: 20724

You can achieve this manually with the following:

  1. In Logs Viewer, create the following filter:

    resource.labels.project_id="<PROJECT_ID>"
    resource.labels.cluster_name="<CLUSTER_NAME>"
    resource.labels.namespace_name="<NAMESPACE, or default>"
    jsonPayload.message:"failed liveness probe"
    
  2. Create a metric by clicking on the Create Metric button above the filter input and filling in the details (a scripted equivalent is sketched after these steps).

  3. You may now track this metric in Stackdriver.
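
If you prefer to script steps 1–2 rather than click through the UI, the same log-based metric can be expressed as a config (a hedged sketch: the metric name failed_liveness_probe and its description are placeholders I chose; the filter is the one from step 1):

    # Hedged sketch of the equivalent LogMetric resource. One way to create it:
    #   gcloud logging metrics create failed_liveness_probe \
    #     --description="Pods failing their liveness probe" \
    #     --log-filter='<the filter below>'
    name: failed_liveness_probe
    description: Pods failing their liveness probe
    filter: >-
      resource.labels.project_id="<PROJECT_ID>"
      resource.labels.cluster_name="<CLUSTER_NAME>"
      resource.labels.namespace_name="<NAMESPACE>"
      jsonPayload.message:"failed liveness probe"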

I'd be happy to be informed of a built-in metric instead of this.

Upvotes: 6

grimmjow_sms

Reputation: 350

Remember that you can always raise a feature request if the available options are not enough.

Upvotes: -1

WizardCXY

Reputation: 51

In my cluster (a bare-metal Kubernetes cluster), I use kube-state-metrics (https://github.com/kubernetes/kube-state-metrics) to do what you want. The project belongs to the Kubernetes repo and is quite easy to use. Once deployed, you can use the kube_pod_container_status_restarts metric to know whether a container has restarted.
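
For example, if Prometheus is scraping kube-state-metrics, a minimal alerting rule could look like the sketch below (hedged: the alert name, threshold and windows are placeholders; newer kube-state-metrics releases rename the metric to kube_pod_container_status_restarts_total):

    groups:
      - name: pod-restarts
        rules:
          - alert: PodRestartingTooOften
            # increase() over the restart counter: restarts during the last hour
            expr: increase(kube_pod_container_status_restarts_total[1h]) > 3
            for: 10m
            labels:
              severity: warning
            annotations:
              summary: 'Container {{ $labels.container }} in pod {{ $labels.pod }} is restarting frequently'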

Upvotes: 5
