Reputation: 636
My Actuator Prometheus metrics are reachable under localhost:5550/linksky/actuator/prometheus. For example, I can see a metric named "http_server_requests_seconds_count".
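(For context, the endpoint is exposed via Spring Boot Actuator and Micrometer roughly like this; a minimal sketch, not my exact configuration:)

# application.yml (sketch): expose the Prometheus endpoint of Spring Boot Actuator;
# assumes the micrometer-registry-prometheus dependency is on the classpath
management:
  endpoints:
    web:
      exposure:
        include: prometheus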
I have set up Prometheus with this docker-compose.yml:
services:
  prometheus:
    image: prom/prometheus
    ports:
      - 9090:9090
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
    networks:
      monitoring:
        aliases:
          - prometheus
networks:
  monitoring:
and this is my prometheus.yml:
scrape_configs:
  - job_name: 'linksky_monitoring'
    scrape_interval: 2s
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['host.docker.internal:5550']
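(Side note: host.docker.internal resolves out of the box on Docker Desktop; on a plain Linux Docker host it usually has to be mapped explicitly in the compose file, roughly like this. This is a sketch, not part of my setup:)

# docker-compose.yml (sketch): map host.docker.internal to the host gateway
# so the Prometheus container can reach an app running on the host (Docker 20.10+)
services:
  prometheus:
    extra_hosts:
      - "host.docker.internal:host-gateway"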
When I start Prometheus, I can query the metric "scrape_duration_seconds" and see that the scrape job is listed.
But when I query "http_server_requests_seconds_count", I get no result. Am I expecting something wrong? Why do I only have this metric in Prometheus, even though the "linksky_monitoring" job seems to be running?
UPDATE and SOLUTION
I need to use a TLS connection, because every request to my Spring Boot app has to go over TLS. To fix this, I extracted the key and cert from my p12 certificate and used the following config:
scrape_configs:
  - job_name: 'monitoring'
    scrape_interval: 2s
    metrics_path: '/jReditt/actuator/prometheus'
    static_configs:
      - targets: ['host.docker.internal:5550']
    scheme: https
    tls_config:
      cert_file: '/etc/prometheus/myApp.cert'
      key_file: '/etc/prometheus/myApp.key'
      insecure_skip_verify: true
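The cert_file and key_file paths refer to files inside the Prometheus container, so they have to be mounted into it; in docker-compose roughly like this (the host-side paths are just an example, not my exact layout):

# docker-compose.yml (sketch): mount the extracted cert and key into the container
services:
  prometheus:
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      # extracted from the p12 certificate; host paths are assumptions
      - ./prometheus/myApp.cert:/etc/prometheus/myApp.cert
      - ./prometheus/myApp.key:/etc/prometheus/myApp.key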
Now it is working fine.
Upvotes: 3
Views: 4382
Reputation: 728
Your metrics_path in prometheus.yml is wrong because it is missing part of the endpoint. It should be /linksky/actuator/prometheus, like below:
scrape_configs:
  - job_name: 'linksky_monitoring'
    scrape_interval: 2s
    metrics_path: '/linksky/actuator/prometheus'
    static_configs:
      - targets: ['host.docker.internal:5550']
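The /linksky prefix comes from the application's servlet context path. Assuming it is set in application.yml roughly like below (a sketch of an assumed setup, not your actual file), every endpoint, including the Actuator ones, is served under that prefix, so Prometheus has to scrape /linksky/actuator/prometheus:

# application.yml (sketch): a servlet context path prefixes every endpoint,
# including /actuator/prometheus
server:
  port: 5550
  servlet:
    context-path: /linksky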
Upvotes: 2