Reputation: 37034
I've enabled both exporters:
Chart.yaml:
...
dependencies:
  ...
  - name: kafka
    version: 17.1.0
    repository: "@bitnami"
    condition: kafka.enabled
...
values.yaml:
kafka:
  metrics:
    kafka:
      enabled: true
    jmx:
      enabled: true
    serviceMonitor:
      enabled: true
      labels:
        my.custom.label/service-monitor: "1.0"
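A quick way to confirm the chart actually rendered the ServiceMonitor objects (the namespace below is a placeholder for wherever the release is installed):

# List the ServiceMonitors rendered by the chart; the label selector
# matches the custom label from values.yaml above.
kubectl get servicemonitor -n kafka -l my.custom.label/service-monitor=1.0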
I can see that both of them appear on the Status -> Service Discovery page of Prometheus, so I consider them to be working.
Next step: I added the following Grafana dashboard: https://grafana.com/grafana/dashboards/12483
But most of the metrics (not all) are not available on that dashboard.
For example, one of the panels uses the jvm_memory_bytes_used metric, but I don't see this metric on the Prometheus side.
How can I fix it?
Upvotes: 0
Views: 1293
Reputation: 191710
CPU and memory load would need to come from a different exporter (e.g. node_exporter), or, for related pod/container metrics within k8s, cAdvisor.
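As a sketch, assuming a standard cAdvisor scrape job and pods whose names start with kafka- (adjust both to your setup; the Prometheus address is a placeholder), the pod-level metrics can be queried through the Prometheus HTTP API:

# Container memory (cAdvisor); substitute your Prometheus service address.
curl -s 'http://prometheus:9090/api/v1/query' \
  --data-urlencode 'query=container_memory_working_set_bytes{pod=~"kafka-.*",container!=""}'

# Container CPU as cores over the last 5 minutes:
curl -s 'http://prometheus:9090/api/v1/query' \
  --data-urlencode 'query=rate(container_cpu_usage_seconds_total{pod=~"kafka-.*",container!=""}[5m])'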
For example, one of the panels uses the jvm_memory_bytes_used metric, but I don't see this metric on the Prometheus side
It's unclear which data source your Grafana dashboard is using. If it is Prometheus, then you will want to first ensure you are querying within the appropriate time range. But only after you manually grep the metrics endpoint that is being scraped, to ensure your JMX exporter is actually reporting that data.
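For example, a minimal sketch of grepping the JMX exporter's scrape output, assuming the Bitnami chart's default JMX metrics port of 5556 and a placeholder pod name:

# In one terminal: forward the JMX exporter port (5556 is the Bitnami
# chart's default; the pod name is a placeholder for your release).
kubectl port-forward pod/my-release-kafka-0 5556:5556

# In another terminal: check whether the metric is actually exposed.
curl -s http://localhost:5556/metrics | grep jvm_memory_bytes_used

If the metric is missing there, the problem is on the JMX exporter side (its rule configuration) rather than in Prometheus or Grafana.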
Upvotes: 1