B Randall

Nothing showing in Grafana for Spring Boot 3 with opentelemetry-javaagent 1.32.0 via the OpenTelemetry Collector

Environment: Spring Boot 3.2.0, JDK 17, Micrometer 1.12.0

First-time user of OpenTelemetry.

Auto-configuration using opentelemetry-javaagent version 1.32.0.

Each microservice is an OAuth2 resource server and the actuator endpoints are protected, meaning I cannot pull metrics with Prometheus scrape jobs hitting the actuator endpoints.
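
For context, the alternative would be to open just the Prometheus actuator endpoint in each resource server's security filter chain, which we want to avoid. A rough sketch of what that would look like in Spring Security 6 (bean name is mine):

    import org.springframework.context.annotation.Bean;
    import org.springframework.security.config.annotation.web.builders.HttpSecurity;
    import org.springframework.security.web.SecurityFilterChain;

    @Bean
    public SecurityFilterChain actuatorChain(HttpSecurity http) throws Exception {
        // Hypothetical: permit unauthenticated scrapes of /actuator/prometheus
        // while every other endpoint stays behind OAuth2.
        http.securityMatcher("/actuator/prometheus")
            .authorizeHttpRequests(auth -> auth.anyRequest().permitAll());
        return http.build();
    }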

Basically, I am taking the default gRPC setup, which defaults to exporting logs, traces, and metrics to localhost:4317:

export OTEL_SERVICE_NAME=ingest
export OTEL_RESOURCE_ATTRIBUTES=service.namespace\=osint,deployment.environment\=local
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
export OTEL_LOGS_EXPORTER=logging-otlp
export OTEL_METRICS_EXPORTER=logging-otlp
export OTEL_TRACES_EXPORTER=logging-otlp

Applying the Micrometer registry / OpenTelemetry registry fix from Bridging OpenTelemetry and Micrometer:

    import java.util.Optional;

    import io.micrometer.core.instrument.MeterRegistry;
    import io.micrometer.core.instrument.Metrics;
    import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
    import org.springframework.context.annotation.Bean;

    @Bean
    @ConditionalOnClass(name = "io.opentelemetry.javaagent.OpenTelemetryAgent")
    public MeterRegistry otelRegistry() {
        // The javaagent attaches its OpenTelemetryMeterRegistry to Metrics.globalRegistry;
        // detach it and re-expose it as a Spring bean so the standard meter binders apply.
        Optional<MeterRegistry> otelRegistry = Metrics.globalRegistry.getRegistries().stream()
            .filter(r -> r.getClass().getName().contains("OpenTelemetryMeterRegistry"))
            .findAny();
        otelRegistry.ifPresent(Metrics.globalRegistry::remove);
        return otelRegistry.orElse(null);
    }
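
With that bean in place, custom meters registered through the injected registry should flow through the agent's exporter as well; a minimal usage sketch (the meter name and the injected meterRegistry variable are mine):

    import io.micrometer.core.instrument.Counter;
    import io.micrometer.core.instrument.MeterRegistry;

    // Hypothetical usage: a counter registered against the bridged registry
    // should show up in the agent's OTLP metric export.
    Counter ingested = Counter.builder("osint.ingest.requests")
            .description("number of ingest requests handled")
            .register(meterRegistry);
    ingested.increment();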

I am getting console output from OtlpJsonLoggingSpanExporter, OtlpJsonLoggingLogRecordExporter, and OtlpJsonLoggingMetricExporter with the JSON payloads.

The problem seems to be in the OTel Collector configuration, and I can't get any logging diagnostics to work. I am sure it is a configuration issue.

I am patterning it off Spring Boot using OpenTelemetry, Prometheus, Grafana, Tempo, and Loki, which are the backends we intend to use:

docker-compose.yml:

version: '3'


services:
  loki:
    container_name: loki
    image: grafana/loki:latest
    command: [ "-config.file=/etc/loki/local-config.yaml" ]
    ports:
      - "3100:3100"

  prometheus:
    container_name: prometheus
    image: prom/prometheus:latest
    volumes:
      - ${OSI_DOCKER_ROOT}\prometheus\prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - "9090:9090"

  tempo:
    container_name: tempo
    image: grafana/tempo:latest
    command: [ "-config.file=/etc/tempo.yml" ]
    volumes:
      - ${OSI_DOCKER_ROOT}\tempo\tempo.yml:/etc/tempo.yml:ro
      - ${OSI_DOCKER_ROOT}\tempo\tempo-data:/tmp/tempo
    ports:
      - "3200:3200"  # Tempo see tempo.yml http_listen_port
      - "4317"  # otlp grpc

  otel-collector:
    container_name: opentelemetry-collector
    image: otel/opentelemetry-collector-contrib:0.82.0
    restart: always
    command:
      - "--config=/etc/otel/config.yml"
    volumes:
      - ${OSI_DOCKER_ROOT}\collector\otel-collector-config.yml:/etc/otel/config.yml
    ports:
      - 1888:1888 # pprof extension
      - 8889:8889 # Prometheus metrics exposed by the Collector
      - 8890:8890 # Prometheus exporter metrics
      - 13133:13133 # health_check extension
      - 4317:4317 # OTLP gRPC receiver
      - 4318:4318 # OTLP http receiver
      - 55679:55679 # zpages extension
    depends_on:
      - prometheus
      - tempo
      - loki


  grafana:
    container_name: grafana
    image: grafana/grafana:latest
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_AUTH_DISABLE_LOGIN_FORM=true
    volumes:
      - ./grafana:/etc/grafana/provisioning/datasources:ro
    ports:
      - 3000:3000
    depends_on:
      - prometheus
      - tempo
      - loki
      - otel-collector

otel-collector-config.yml:

extensions:
  health_check:
  pprof:
  zpages:

############ Receivers from application
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  # batch metrics before sending to reduce API usage
  batch:
    send_batch_max_size: 1000
    send_batch_size: 100
    timeout: 10s
  #https://community.grafana.com/t/error-sending-logs-with-loki-resource-labels/87561/2

######### From processors, export to logging, tracing, and/or metrics backends
exporters:
  logging:
    verbosity: detailed
  #Metrics to Prometheus
  prometheus:
    endpoint: "0.0.0.0:8889"
    const_labels:
      label1: osint
  # tracing to tempo
  otlp:
    endpoint: tempo:4317
    tls:
      insecure: true
  # logging to loki
  loki:
    endpoint: "http://loki:3100/loki/api/v1/push"

# The Collector pipeline.
service:
  telemetry:
    logs:
      level: debug
      development: true
      sampling:
        initial: 10
        thereafter: 5
      output_paths:
        - stdout
      error_output_paths:
        - stderr
    metrics:
      level: detailed
  extensions: [health_check, pprof, zpages]
  pipelines:
    # for now we are only interested in metrics...
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
    traces:
      receivers: [ otlp ]
      processors: [ batch ]
      exporters: [ otlp ]  # name here must match the exporter name for Tempo, which is otlp
    logs:
      receivers: [otlp]
      exporters: [loki, logging]
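
To sanity-check this pipeline independently of the javaagent, a standalone snippet like the following could push one test metric straight to the collector's gRPC receiver (a sketch; assumes the opentelemetry-sdk and opentelemetry-exporter-otlp artifacts are on the classpath, class and meter names are mine):

    import java.util.concurrent.TimeUnit;

    import io.opentelemetry.exporter.otlp.metrics.OtlpGrpcMetricExporter;
    import io.opentelemetry.sdk.metrics.SdkMeterProvider;
    import io.opentelemetry.sdk.metrics.export.PeriodicMetricReader;

    public class CollectorSmokeTest {
        public static void main(String[] args) {
            // Send one counter increment to the collector's OTLP gRPC receiver.
            SdkMeterProvider provider = SdkMeterProvider.builder()
                .registerMetricReader(PeriodicMetricReader.builder(
                        OtlpGrpcMetricExporter.builder()
                            .setEndpoint("http://localhost:4317")
                            .build())
                    .build())
                .build();
            provider.get("smoke-test").counterBuilder("smoke.counter").build().add(1);
            // Flush and shut down so the data point is exported before the JVM exits.
            provider.forceFlush().join(10, TimeUnit.SECONDS);
            provider.close();
        }
    }

If the metrics pipeline is healthy, the counter should then show up (with Prometheus-normalised naming) on the collector's Prometheus endpoint at http://localhost:8889/metrics.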

prometheus.yml:

global:
  scrape_interval: 15s
  evaluation_interval: 15s
 
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['prometheus:9090']  #docker-compose prometheus port

tempo.yml:

server:
  http_listen_port: 3200

distributor:
  receivers:                           # this configuration will listen on all ports and protocols that tempo is capable of.
    zipkin:
    jaeger:                            # the receivers all come from the OpenTelemetry collector.  more configuration information can
      protocols:                       # be found there: https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver
        thrift_http:                   #
        grpc:                          # for a production deployment you should only enable the receivers you need!
        thrift_binary:
        thrift_compact:
    otlp:
      protocols:
        http:
        grpc:
    opencensus:

ingester:
  max_block_duration: 5m               # cut the headblock when this much time passes. this is being set for demo purposes and should probably be left alone normally

compactor:
  compaction:
    block_retention: 1h                # overall Tempo trace retention. set for demo purposes

metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: docker-compose
  storage:
    path: /tmp/tempo/generator/wal

storage:
  trace:
    backend: local                     # backend configuration to use
    wal:
      path: /tmp/tempo/wal             # where to store the wal locally
    local:
      path: /tmp/tempo/blocks

overrides:
  metrics_generator_processors: [service-graphs, span-metrics] # enables metrics generator

POSTing the following payload to http://localhost:4318/v1/traces:

{
  "resource": {
    "attributes": [
      {
        "key": "deployment.environment",
        "value": {
          "stringValue": "local"
        }
      },
      {
        "key": "host.arch",
        "value": {
          "stringValue": "amd64"
        }
      },
      {
        "key": "host.name",
        "value": {
          "stringValue": "BRANDA-BSTAQ-PC"
        }
      },
      {
        "key": "os.description",
        "value": {
          "stringValue": "Windows 10 10.0"
        }
      },
      {
        "key": "os.type",
        "value": {
          "stringValue": "windows"
        }
      },
      {
        "key": "process.command_line",
        "value": {
          "stringValue": "C:\\sfw\\java\\jdk17.0.7_7\\bin\\java.exe -javaagent:./agent/opentelemetry-javaagent.jar -XX:TieredStopAtLevel=1 -Xverify:none -Dspring.profiles.active=local, local-otel -Dspring.output.ansi.enabled=always -Dcom.sun.management.jmxremote -Dspring.jmx.enabled=true -Dspring.liveBeansView.mbeanDomain -Dspring.application.admin.enabled=true -javaagent:C:\\Program Files\\IntelliJ IDEA 2020\\lib\\idea_rt.jar=57806:C:\\Program Files\\IntelliJ IDEA 2020\\bin -Dfile.encoding=UTF-8 com.osi.ingest.IngestApplication"
        }
      },
      {
        "key": "process.executable.path",
        "value": {
          "stringValue": "C:\\sfw\\java\\jdk17.0.7_7\\bin\\java.exe"
        }
      },
      {
        "key": "process.pid",
        "value": {
          "intValue": "4032"
        }
      },
      {
        "key": "process.runtime.description",
        "value": {
          "stringValue": "Amazon.com Inc. OpenJDK 64-Bit Server VM 17.0.7+7-LTS"
        }
      },
      {
        "key": "process.runtime.name",
        "value": {
          "stringValue": "OpenJDK Runtime Environment"
        }
      },
      {
        "key": "process.runtime.version",
        "value": {
          "stringValue": "17.0.7+7-LTS"
        }
      },
      {
        "key": "service.name",
        "value": {
          "stringValue": "ingest"
        }
      },
      {
        "key": "service.namespace",
        "value": {
          "stringValue": "osint"
        }
      },
      {
        "key": "telemetry.auto.version",
        "value": {
          "stringValue": "1.32.0"
        }
      },
      {
        "key": "telemetry.sdk.language",
        "value": {
          "stringValue": "java"
        }
      },
      {
        "key": "telemetry.sdk.name",
        "value": {
          "stringValue": "opentelemetry"
        }
      },
      {
        "key": "telemetry.sdk.version",
        "value": {
          "stringValue": "1.32.0"
        }
      }
    ]
  },
  "scopeSpans": [
    {
      "scope": {
        "name": "io.opentelemetry.java-http-client",
        "version": "1.32.0-alpha",
        "attributes": []
      },
      "spans": [
        {
          "traceId": "449f8711153cf82b2e2ce8d7bbeaca2c",
          "spanId": "62405db1f5d0c262",
          "name": "POST",
          "kind": 3,
          "startTimeUnixNano": "1706304807905937000",
          "endTimeUnixNano": "1706304807995062800",
          "attributes": [
            {
              "key": "thread.id",
              "value": {
                "intValue": "34"
              }
            },
            {
              "key": "net.peer.name",
              "value": {
                "stringValue": "localhost"
              }
            },
            {
              "key": "http.status_code",
              "value": {
                "intValue": "204"
              }
            },
            {
              "key": "net.protocol.version",
              "value": {
                "stringValue": "1.1"
              }
            },
            {
              "key": "thread.name",
              "value": {
                "stringValue": "loki4j-sender-0"
              }
            },
            {
              "key": "http.method",
              "value": {
                "stringValue": "POST"
              }
            },
            {
              "key": "net.peer.port",
              "value": {
                "intValue": "3100"
              }
            },
            {
              "key": "net.protocol.name",
              "value": {
                "stringValue": "http"
              }
            },
            {
              "key": "http.url",
              "value": {
                "stringValue": "http://localhost:3100/loki/api/v1/push"
              }
            }
          ],
          "events": [],
          "links": [],
          "status": {}
        }
      ]
    }
  ],
  "schemaUrl": "https://opentelemetry.io/schemas/1.21.0"
}

Results in: 200 OK

{
    "partialSuccess": {}
}

The same happens with http://localhost:4318/v1/metrics and http://localhost:4318/v1/logs.
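
For reference, the manual POST can be reproduced with a java.net.http sketch like this (class name and the trace.json file name are mine; any HTTP client works):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Path;

    public class OtlpHttpCheck {
        public static void main(String[] args) throws Exception {
            // POST the OTLP/JSON payload above (saved as trace.json) to the
            // collector's HTTP receiver and print the status and response body.
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:4318/v1/traces"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofFile(Path.of("trace.json")))
                .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }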

Collector logs:

2024-01-26 11:32:09 2024-01-26T18:32:09.739Z    info    zapgrpc/zapgrpc.go:178  [transport] [server-transport 0xc000e4cb60] Closing: connection error: desc = "transport: http2Server.HandleStreams received bogus greeting from client: \"POST /api/v2/spans HTTP/\""    {"grpc_log": true}
2024-01-26 11:32:09 2024-01-26T18:32:09.739Z    info    zapgrpc/zapgrpc.go:178  [core] [Server #1] grpc: Server.Serve failed to create ServerTransport: connection error: desc = "transport: http2Server.HandleStreams received bogus greeting from client: \"POST /api/v2/spans HTTP/\"" {"grpc_log": true}
2024-01-26 14:36:44 2024-01-26T21:36:44.322Z    info    zapgrpc/zapgrpc.go:178  [transport] [server-transport 0xc000e4d040] Closing: connection error: desc = "transport: http2Server.HandleStreams received bogus greeting from client: \"POST /api/v2/logs HTTP/1\""    {"grpc_log": true}
2024-01-26 14:36:44 2024-01-26T21:36:44.322Z    info    zapgrpc/zapgrpc.go:178  [core] [Server #1] grpc: Server.Serve failed to create ServerTransport: connection error: desc = "transport: http2Server.HandleStreams received bogus greeting from client: \"POST /api/v2/logs HTTP/1\"" {"grpc_log": true}
2024-01-26 14:39:03 2024-01-26T21:39:03.649Z    info    zapgrpc/zapgrpc.go:178  [transport] [server-transport 0xc000e4d040] Closing: connection error: desc = "transport: http2Server.HandleStreams received bogus greeting from client: \"POST /api/v2/logs HTTP/1\""    {"grpc_log": true}
2024-01-26 14:39:03 2024-01-26T21:39:03.649Z    info    zapgrpc/zapgrpc.go:178  [core] [Server #1] grpc: Server.Serve failed to create ServerTransport: connection error: desc = "transport: http2Server.HandleStreams received bogus greeting from client: \"POST /api/v2/logs HTTP/1\"" {"grpc_log": true}
2024-01-26 14:48:45 2024-01-26T21:48:45.878Z    info    zapgrpc/zapgrpc.go:178  [transport] [server-transport 0xc000e4d040] Closing: connection error: desc = "transport: http2Server.HandleStreams received bogus greeting from client: \"POST /api/v2/spans HTTP/\""    {"grpc_log": true}
2024-01-26 14:48:45 2024-01-26T21:48:45.878Z    info    zapgrpc/zapgrpc.go:178  [core] [Server #1] grpc: Server.Serve failed to create ServerTransport: connection error: desc = "transport: http2Server.HandleStreams received bogus greeting from client: \"POST /api/v2/spans HTTP/\"" {"grpc_log": true}
2024-01-26 14:56:21 2024-01-26T21:56:21.866Z    info    zapgrpc/zapgrpc.go:178  [transport] [server-transport 0xc000e4d040] Closing: connection error: desc = "transport: http2Server.HandleStreams received bogus greeting from client: \"POST /api/v2/metrics HTT\""    {"grpc_log": true}
2024-01-26 14:56:21 2024-01-26T21:56:21.866Z    info    zapgrpc/zapgrpc.go:178  [core] [Server #1] grpc: Server.Serve failed to create ServerTransport: connection error: desc = "transport: http2Server.HandleStreams received bogus greeting from client: \"POST /api/v2/metrics HTT\"" {"grpc_log": true}

Answers (1)

Aashrai Ravooru

I faced the same issue recently and was able to fix it by moving to the latest version of io.micrometer. The javaagent automatically attaches an OpenTelemetryMeterRegistry if the Micrometer version is above 1.5 (ref).

As per my understanding, the OTel javaagent automatically sends the metrics collected by OpenTelemetryMeterRegistry to the OTel collector, which should enable your Grafana visualisation.
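
As a quick check (my own sketch, not something from the javaagent docs), you can print which registries are attached to the global composite at startup; with a compatible Micrometer version, an OpenTelemetryMeterRegistry should be listed:

    import io.micrometer.core.instrument.Metrics;

    // Hypothetical startup check: list every registry the agent (or the app)
    // has attached to the global composite registry.
    Metrics.globalRegistry.getRegistries()
            .forEach(r -> System.out.println(r.getClass().getName()));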
