Reputation: 1116
I have configured Docker Compose for the OpenTelemetry Collector, Prometheus, and Jaeger, and I send data via the OTel agent. Jaeger is working fine, but Prometheus is not showing any metrics even though the collector receives metrics data.
Following is my configuration:
docker-compose.yml:
# docker-compose.yml file
version: "3.5"
services:
  jaeger:
    container_name: jaeger
    hostname: jaeger
    networks:
      - backend
    image: jaegertracing/all-in-one:latest
    volumes:
      - "./jaeger-ui.json:/etc/jaeger/jaeger-ui.json"
    command: --query.ui-config /etc/jaeger/jaeger-ui.json
    environment:
      - METRICS_STORAGE_TYPE=prometheus
      - PROMETHEUS_SERVER_URL=http://prometheus:9090
    ports:
      - "14250:14250"
      - "14268:14268"
      - "6831:6831/udp"
      - "16686:16686"
      - "16685:16685"
  collector:
    container_name: collector
    hostname: collector
    networks:
      - backend
    image: otel/opentelemetry-collector-contrib:latest
    volumes:
      - "./otel-collector-config.yml:/etc/otelcol/otel-collector-config.yml"
    command: --config /etc/otelcol/otel-collector-config.yml
    ports:
      - "5555:5555"
      - "6666:6666"
    depends_on:
      - jaeger
  prometheus:
    container_name: prometheus
    hostname: prometheus
    networks:
      - backend
    image: prom/prometheus:latest
    volumes:
      - "./prometheus.yml:/etc/prometheus/prometheus.yml"
    ports:
      - "9090:9090"
networks:
  backend:
otel-collector-config.yml:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:5555
processors:
  batch:
    timeout: 1s
    send_batch_size: 1
exporters:
  prometheus:
    endpoint: "collector:6666"
  jaeger:
    endpoint: "jaeger:14250" # using the docker-compose name of the jaeger container
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [ otlp ]
      processors: [ batch ]
      exporters: [ jaeger ]
    metrics:
      receivers: [ otlp ]
      processors: [ batch ]
      exporters: [ prometheus ]
prometheus.yml:
global:
  scrape_interval: 1s # set the scrape interval to every 1 second; the default is every 1 minute
  evaluation_interval: 1s # evaluate rules every 1 second; the default is every 1 minute
  # scrape_timeout is set to the global default (10s)
scrape_configs:
  - job_name: collector
    scrape_interval: 1s
    static_configs:
      - targets: [ 'collector:6666' ] # using the name of the OpenTelemetry Collector container defined in the docker-compose file
Following is my tracer.properties config used for the OTel Java agent:
otel.traces.exporter=otlp,logging
otel.metrics.exporter=otlp
otel.logs.exporter=none
otel.service.name=service1
otel.exporter.otlp.endpoint=http://0.0.0.0:5555
otel.exporter.otlp.protocol=grpc
otel.traces.sampler=always_on
otel.metric.export.interval=1000
I can get trace data in Jaeger without any issues. However, metrics are not working, and I am also unable to see any metrics data in Prometheus.
What config am I missing for this to work? Also, please specify how to optimize this for production.
Upvotes: 0
Views: 2014
Reputation: 995
I've run into the same issue, using the OpenTelemetry demo as a base.
It is a bit misleading and confusing (for people like me, who have just started with OTel). There are multiple spanmetrics "plugins" whose produced data is incompatible with each other: calls_total and latency_bucket vs. calls and duration_bucket.
The PROCESSOR produces data in Prometheus that is compatible with Jaeger, so my "Monitor" tab in the Jaeger UI now shows metrics.
The CONNECTOR uses a new format, which is compatible with the example Grafana dashboards included in the demo folder (thus I don't want to change them).
It should be possible to change the general exporting format of the CONNECTOR, but I don't know yet how that works (a guess at what that could look like is sketched after the config below).
This config is a bit simplified; see the comments for which spanmetrics variant is used where:
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  otlp:
    endpoint: "jaeger:4317"
    tls:
      insecure: true
  logging:
  prometheus:
    # this prometheus is used by `spanmetrics` CONNECTOR + PROCESSOR
    endpoint: "otelcol:9464"
    # these configs apply to the `spanmetrics` CONNECTOR
    resource_to_telemetry_conversion:
      enabled: true
    enable_open_metrics: true # even with `false`, the data wasn't compatible with `jaeger`
processors:
  batch:
  # deprecated spanmetrics PROCESSOR, used for compatibility with Jaeger metrics
  spanmetrics:
    metrics_exporter: prometheus
connectors:
  # new spanmetrics CONNECTOR
  spanmetrics:
service:
  pipelines:
    traces:
      receivers: [ otlp ]
      processors: [ batch, spanmetrics ] # configuring spanmetrics PROCESSOR (deprecated)
      exporters: [ otlp, spanmetrics ]   # configuring spanmetrics CONNECTOR (not compatible w/ jaeger)
    metrics:
      receivers: [ redis, otlp, spanmetrics ] # configuring spanmetrics CONNECTOR as source of "metrics" (not compatible w/ jaeger)
      processors: [ batch ]
      exporters: [ prometheus ]
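As for changing the CONNECTOR's output format mentioned above: a minimal sketch of what that might look like, assuming the namespace, histogram, and dimensions options of the contrib spanmetrics connector; the prefix, buckets, and dimension below are illustrative only and not taken from the demo, so check the connector README for your collector version.

connectors:
  spanmetrics:
    # assumption: these fields exist in the contrib spanmetrics connector
    namespace: span.metrics                 # prefix added to the generated metric names (illustrative)
    histogram:
      explicit:
        buckets: [ 100ms, 250ms, 1s, 5s ]   # illustrative duration buckets
    dimensions:
      - name: http.method                   # extra label taken from span attributes (illustrative)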
Upvotes: 2
Reputation: 3942
The Monitor tab in Jaeger requires you to set up the spanmetrics processor. This processor looks at spans sent to the OpenTelemetry Collector and, if the span.kind is server, creates metrics for the duration of the spans and keeps them in memory until Prometheus scrapes the metrics endpoint, typically on port 8889. The Jaeger UI can then collect these metrics from Prometheus.
Without the spanmetrics processor, you will not be able to see any data in Jaeger's Monitor tab.
Look at the service performance monitoring documentation on setting up the Monitor tab, as it describes these details.
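A minimal collector config sketch for that setup, assuming the (deprecated) spanmetrics processor from the contrib image and a Prometheus exporter listening on port 8889; this is not the exact config from the documentation, so treat the ports and exporter names as placeholders.

receivers:
  otlp:
    protocols:
      grpc:
exporters:
  jaeger:
    endpoint: "jaeger:14250"
    tls:
      insecure: true
  prometheus:
    endpoint: "0.0.0.0:8889"         # Prometheus scrapes this endpoint; Jaeger then queries Prometheus
processors:
  batch:
  spanmetrics:
    metrics_exporter: prometheus     # deprecated processor; points the generated metrics at the prometheus exporter
service:
  pipelines:
    traces:
      receivers: [ otlp ]
      processors: [ spanmetrics, batch ]
      exporters: [ jaeger ]
    metrics:
      receivers: [ otlp ]
      processors: [ batch ]
      exporters: [ prometheus ]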
Upvotes: -1
Reputation: 324
The bind address for the prometheus exporter is "collector:6666". This means that the created server will accept requests only on port 6666 and only on the address that the hostname collector resolves to, while Prometheus connects from a different host. It's better to bind to "any address", e.g. "0.0.0.0:6666".
Also, you can use the prometheusremotewrite exporter instead of prometheus. This way you will be able to see problems in the collector logs.
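A sketch of that change against the question's files; only the bind address of the exporter changes, while the Prometheus scrape target keeps using the docker-compose service name.

# otel-collector-config.yml (excerpt): listen on all interfaces inside the container
exporters:
  prometheus:
    endpoint: "0.0.0.0:6666"

# prometheus.yml (excerpt): still reach the collector by its compose service name
scrape_configs:
  - job_name: collector
    static_configs:
      - targets: [ 'collector:6666' ]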
Upvotes: 1