DmitrySemenov

Reputation: 10375

prometheus scrape ignores the "prometheus.io/port: http-metrics" annotation and scrapes all ports on the pod

I have a pod that exposes two ports:

➜ k ice port | grep server                
argo-cd-argocd-server-54c4cfd7f7-k68tm                     server                     server       8080  TCP    -
argo-cd-argocd-server-54c4cfd7f7-k68tm                     server                     metrics      8083  TCP    -

I have the following Service that I want Prometheus to scrape on the http-metrics port:

➜ kgsvc argo-cd-argocd-server-metrics -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: http-metrics
    prometheus.io/scrape: "true"
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/instance: argo-cd
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: argocd-server-metrics
    app.kubernetes.io/part-of: argocd
    argocd.argoproj.io/instance: argo-cd
    helm.sh/chart: argo-cd-5.13.1
  name: argo-cd-argocd-server-metrics
  namespace: argo-cd
spec:
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http-metrics
    port: 8083
    protocol: TCP
    targetPort: metrics
  selector:
    app.kubernetes.io/instance: argo-cd
    app.kubernetes.io/name: argocd-server
  sessionAffinity: None
  type: ClusterIP

Prometheus then tries to scrape both ports on the pod: 8083 (metrics, correct) and 8080 (the web app, incorrect).

At the same time, if I change the annotation to a numeric port

prometheus.io/port: 8083

then everything works as expected.
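
For reference, the annotation block that works then looks like this on the Service (the value has to be a quoted string, since Kubernetes stores annotation values as strings):

metadata:
  annotations:
    prometheus.io/port: "8083"
    prometheus.io/scrape: "true"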

I see that Prometheus has the following portion of its configuration:

- honor_labels: true
  job_name: kubernetes-service-endpoints
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  - action: keep
    regex: true
    source_labels:
    - __meta_kubernetes_service_annotation_prometheus_io_scrape
  - action: drop
    regex: true
    source_labels:
    - __meta_kubernetes_service_annotation_prometheus_io_scrape_slow
  - action: replace
    regex: (https?)
    source_labels:
    - __meta_kubernetes_service_annotation_prometheus_io_scheme
    target_label: __scheme__
  - action: replace
    regex: (.+)
    source_labels:
    - __meta_kubernetes_service_annotation_prometheus_io_path
    target_label: __metrics_path__
  - action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    source_labels:
    - __address__
    - __meta_kubernetes_service_annotation_prometheus_io_port
    target_label: __address__

with the regex ([^:]+)(?::\d+)?;(\d+), whose second capture group only accepts digits, not alphanumeric values like http-metrics.

Is this the root of the issue?
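
A quick sketch with Go's regexp package (the engine Prometheus uses; relabel regexes are fully anchored) seems to confirm it, using a made-up pod address:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The relabel expression from the kubernetes-service-endpoints job,
	// anchored the way Prometheus anchors it internally.
	re := regexp.MustCompile(`^(?:([^:]+)(?::\d+)?;(\d+))$`)

	// source_labels are joined with ";": __address__ followed by the
	// prometheus.io/port annotation value. 10.42.0.7:8080 is a made-up address.
	fmt.Println(re.MatchString("10.42.0.7:8080;8083"))         // true  -> __address__ is rewritten to 10.42.0.7:8083
	fmt.Println(re.MatchString("10.42.0.7:8080;http-metrics")) // false -> no rewrite, the discovered port is kept
}

So with a named port the replace rule never fires and the discovered targets keep their original ports, which matches what I see.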

Upvotes: 1

Views: 1318

Answers (1)

Isaiah4110

Reputation: 10120

As far as I know, the prometheus.io/port annotation is supposed to take a port number directly, not a string like "http-metrics" as you have used above. The Prometheus configuration/regex also only reads digits, not an arbitrary string. That explains why it works as expected when you change your config to "prometheus.io/port: 8083".

prometheus.io/path: Optional, defaults to /metrics.

prometheus.io/port: Optional, default is %%port%%, a template variable that is replaced by the container/service port.

The default port is taking effect, and that is why both 8080 and 8083 are getting scraped, I guess.
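
For a quick test, setting the numeric annotation directly on the existing Service should be enough (service and namespace names taken from your output); for a permanent fix you would set it through the Helm chart's values so it survives upgrades:

kubectl -n argo-cd annotate service argo-cd-argocd-server-metrics \
  prometheus.io/port=8083 --overwrite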

Reference: https://docs.datadoghq.com/containers/kubernetes/prometheus/?tab=kubernetesadv2

Upvotes: 1
