Vincent Adams

Reputation: 81

How do I actually view metrics being reported to Google Cloud Managed Service for Prometheus? (Open telemetry)

I am new to GCP, and I'm following this documentation in order to deploy an OpenTelemetry Collector on a GKE cluster.

If you wish to follow that documentation along with this question, note the following:

As explained here, the sample application emits the example_requests_total counter metric and the example_random_numbers histogram metric (among others) on its metrics port.

My question is: where do I actually view these metrics in the Cloud Console? How do I verify that this is working correctly?

I can't seem to find them in the Metrics Explorer, or anywhere else.
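In case it's relevant, here is a minimal sketch of the kind of check I had in mind, assuming the google-cloud-monitoring Python client and a placeholder PROJECT_ID (as I understand it, Managed Service for Prometheus ingests scraped metrics under the prometheus.googleapis.com/ prefix, e.g. prometheus.googleapis.com/example_requests_total/counter):

# Sketch only: list metric descriptors matching the sample app's metrics.
# Assumes pip install google-cloud-monitoring; PROJECT_ID is a placeholder.
from google.cloud import monitoring_v3

PROJECT_ID = "your-project-id"  # placeholder, not my real project ID

client = monitoring_v3.MetricServiceClient()
descriptors = client.list_metric_descriptors(
    request={
        "name": f"projects/{PROJECT_ID}",
        # Managed Service for Prometheus metrics should appear under this prefix,
        # e.g. prometheus.googleapis.com/example_requests_total/counter.
        "filter": 'metric.type = starts_with("prometheus.googleapis.com/example_")',
    }
)
for descriptor in descriptors:
    print(descriptor.type)  # nothing printed => nothing ingested yet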

As a final note, the logs don't seem to be related to the issue, but I'll leave the logs from the relevant workloads below just in case (these are all of the logs).

Thanks!


otel-collector (Collector)

2024-01-30T19:46:00.541Z        info    [email protected]/telemetry.go:86 Setting up own telemetry...
2024-01-30T19:46:00.541Z        info    [email protected]/telemetry.go:159        Serving metrics {"address": ":8888", "level": "Basic"}
2024-01-30T19:46:00.549Z        info    [email protected]/memorylimiter.go:118     Using percentage memory limiter {"kind": "processor", "name": "memory_limiter", "pipeline": "metrics", "total_memory_mib": 3928, "limit_percentage": 65, "spike_limit_percentage": 20}
2024-01-30T19:46:00.549Z        info    [email protected]/memorylimiter.go:82      Memory limiter configured       {"kind": "processor", "name": "memory_limiter", "pipeline": "metrics", "limit_mib": 2553, "spike_limit_mib": 785, "check_interval": 1}
2024-01-30T19:46:00.550Z        info    [email protected]/service.go:151  Starting otelcol-contrib...     {"Version": "0.92.0", "NumCPU": 2}
2024-01-30T19:46:00.550Z        info    extensions/extensions.go:34     Starting extensions...
2024-01-30T19:46:00.551Z        info    internal/resourcedetection.go:125       began detecting resource information    {"kind": "processor", "name": "resourcedetection", "pipeline": "metrics"}
2024-01-30T19:46:00.553Z        info    internal/resourcedetection.go:139       detected resource information   {"kind": "processor", "name": "resourcedetection", "pipeline": "metrics", "resource": {"cloud.account.id":"wv63-victorag","cloud.availability_zone":"us-central1-c","cloud.platform":"gcp_kubernetes_engine","cloud.provider":"gcp","host.id":"8821238740309833574","host.name":"gke-opentelemetry-cluste-default-pool-ba8749ae-69nb","k8s.cluster.name":"opentelemetry-cluster"}}
2024-01-30T19:46:00.553Z        info    [email protected]/metrics_receiver.go:231      Scrape job added        {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "jobName": "opentelemetry"}
2024-01-30T19:46:00.554Z        info    kubernetes/kubernetes.go:329    Using pod service account via in-cluster config {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "discovery": "kubernetes", "config": "opentelemetry"}
2024-01-30T19:46:00.555Z        info    [email protected]/service.go:177  Everything is ready. Begin running and processing data.
2024-01-30T19:46:00.555Z        info    [email protected]/metrics_receiver.go:240      Starting discovery manager      {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
2024-01-30T19:46:00.555Z        info    [email protected]/metrics_receiver.go:282      Starting scrape manager {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}

prom-example (Example application)

{"level":"info","ts":1706643879.564641,"caller":"build/main.go:103","msg":"Starting HTTP server"}
{"level":"info","ts":1706643879.5656836,"caller":"build/main.go:38","msg":"Started number generator"}

Upvotes: 0

Views: 486

Answers (1)

Vincent Adams

Reputation: 81

Solved: I was able to view the metrics in Cloud Monitoring after waiting for a few hours. It seems that they simply took a long time to show up in the Metrics Explorer.
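Metrics ingested through Managed Service for Prometheus should show up under the prometheus.googleapis.com/ prefix (for example prometheus.googleapis.com/example_requests_total/counter), so once they appear you can also read them back with the Cloud Monitoring API. A minimal sketch, assuming the google-cloud-monitoring Python client and a placeholder PROJECT_ID:

# Sketch only: read back the ingested counter's time series for the last hour.
# Assumes pip install google-cloud-monitoring; PROJECT_ID is a placeholder.
import time

from google.cloud import monitoring_v3

PROJECT_ID = "your-project-id"  # placeholder

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}}
)
results = client.list_time_series(
    request={
        "name": f"projects/{PROJECT_ID}",
        "filter": 'metric.type = "prometheus.googleapis.com/example_requests_total/counter"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for series in results:
    print(series.metric.type, len(series.points))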

Upvotes: 1
