Reputation: 764
I have a GKE cluster running kubernetes 1.16.9. I'm trying to get a monitoring system working using Prometheus, and Grafana.
The dashboard I'm using is the standard "Kubernetes Cluster Monitoring" https://grafana.com/grafana/dashboards/315
When I import it though, I can't see the pod-by-pod CPU/Memory usage, I just see "value":
I have another cluster with an almost identical setup using kubernetes 1.15 and the dashboard works perfectly showing each pod and the usage of each pod.
Why is this the case? I'm fairly new to understanding prometheus/grafana and how all this works together.
What could be causing this issue? The metrics are showing, and kubectl top pod returns data, so I think metrics-server is working well.
Any tips on trying to debug this?
Upvotes: 2
Views: 1767
Reputation: 8481
You are not alone with this problem. The situation is this: the pod_name and container_name labels coming from the kubelet were deprecated in Kubernetes 1.14 in favor of pod and container, and then removed entirely in 1.16. From the 1.16 release notes:

Removed metrics: Removed cadvisor metric labels pod_name and container_name to match instrumentation guidelines. Any Prometheus queries that match pod_name and container_name labels (e.g. cadvisor or kubelet probe metrics) must be updated to use pod and container instead. (#80376, @ehashman)

So, per the above, any dashboard queries that filter or group by pod_name and container_name must be updated to use pod and container instead.
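Concretely, the fix looks like the following. This is a sketch, not the exact query from dashboard 315, but it uses a real cadvisor metric (container_cpu_usage_seconds_total) to illustrate the label rename:

```promql
# Before (works on Kubernetes <= 1.15, breaks on 1.16+):
sum(rate(container_cpu_usage_seconds_total{pod_name!=""}[5m])) by (pod_name)

# After (Kubernetes 1.16+): same query with the renamed labels
sum(rate(container_cpu_usage_seconds_total{pod!=""}[5m])) by (pod)
```

On 1.16 the old query returns nothing to group by, which is why the dashboard collapses everything into a single "value" series.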
Very similar question for your reference - Grafana dashboard not displaying pod name instead pod_name
By the way, have you tried this one? https://grafana.com/grafana/dashboards/11143
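As for debugging tips: you can confirm which label set your cluster actually exposes by running two quick queries in the Prometheus UI (or in Grafana's Explore view). On a 1.16 cluster the first should return per-pod series and the second should return no data; on your 1.15 cluster it will be the other way around:

```promql
# New-style labels (Kubernetes 1.16+): should return one series per pod
count by (pod) (container_cpu_usage_seconds_total{pod!=""})

# Old-style labels (removed in 1.16): should return nothing on 1.16+
count by (pod_name) (container_cpu_usage_seconds_total{pod_name!=""})
```

If the first query returns data and the dashboard still shows only "value", the dashboard's own queries are the problem, not your metrics pipeline.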
Upvotes: 3