Ed Yelisseyev

Reputation: 53

Helm Prometheus operator doesn't add new ServiceMonitor endpoints to targets

I'm trying to monitor my app using the Prometheus Helm charts from https://github.com/prometheus-community/helm-charts. I've installed the chart successfully:

prometheus-kube-prometheus-operator-5d8dcd5988-bw222   1/1     Running   0          11h
prometheus-kube-state-metrics-5d45f64d67-97vxt         1/1     Running   0          11h
prometheus-prometheus-kube-prometheus-prometheus-0     2/2     Running   0          11h
prometheus-prometheus-node-exporter-gl4cz              1/1     Running   0          11h
prometheus-prometheus-node-exporter-mxrsm              1/1     Running   0          11h
prometheus-prometheus-node-exporter-twvdb              1/1     Running   0          11h

The app's Service and Deployment were created in the same namespace with these YAML configs:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: appservice
  namespace: monitoring
  labels:
    app: appservice
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/actuator/prometheus'
spec:
  replicas: 1
  selector:
    matchLabels:
      app: appservice
  template:
    metadata:
      labels:
        app: appservice
...
apiVersion: v1
kind: Service
metadata:
  name: appservice
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/actuator/prometheus'
spec:
  selector:
    app: appservice
  type: ClusterIP
  ports:
    - name: web
      protocol: TCP
      port: 8080
      targetPort: 8080
    - name: jvm-debug
      protocol: TCP
      port: 5005
      targetPort: 5005

After the app was deployed, I created a ServiceMonitor:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: appservice-servicemonitor
  namespace: monitoring
  labels:
    app: appservice
    release: prometheus-repo
spec:
  selector:
    matchLabels:
      app: appservice # target app service
  namespaceSelector:
    matchNames:
      - monitoring
  endpoints:
  - port: web
    path: '/actuator/prometheus'
    interval: 15s

I expected that after adding this ServiceMonitor, my prometheus instance would create a new target like "http://appservice:8080/actuator/prometheus", but it doesn't: no new endpoints appear in the prometheus UI.
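A quick way to check that the endpoint itself responds from inside the cluster is a throwaway curl pod (the image here is just an example):

kubectl -n monitoring run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -s http://appservice:8080/actuator/prometheus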

I tried to change the Helm values by adding additionalServiceMonitors:

namespaceOverride: "monitoring"
nodeExporter:
  enabled: true

prometheus:
  enabled: true
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelector:
      matchLabels:
       release: prometheus-repo
    additionalServiceMonitors:
      namespaceSelector:
        any: true
    replicas: 1
    shards: 1
    storageSpec:
      ...
    securityContext:
      ...
    nodeSelector:
      assignment: monitoring

  nodeSelector:
    assignment: monitoring

prometheusOperator:
  nodeSelector:
    assignment: monitoring
  admissionWebhooks:
    patch:
      securityContext:
        ...
  securityContext:
    ...

global:
  alertmanagerSpec:
    nodeSelector:
      assignment: monitoring

But it didn't help. It is really hard to say what is going wrong: there are no error logs and all configs apply successfully.
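For reference, this is roughly how the applied objects and the operator logs can be checked (the deployment name is taken from the pod listing above; adjust it to your release):

kubectl -n monitoring get servicemonitor appservice-servicemonitor -o yaml
kubectl -n monitoring logs deploy/prometheus-kube-prometheus-operator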

Upvotes: 5

Views: 3739

Answers (3)

Przemek Nowak

Reputation: 7713

Recently I had a case where, after an upgrade of ArgoCD, the default annotation it uses to determine which resources belong to the app changed.

It is now app.kubernetes.io/instance, which can conflict with (override) the 'expected' release name that Helm generates. As an outcome, the release name can get mixed up with the ArgoCD app instance name. In this case you could end up with annotation values like my-release-name and, for example, dev-my-release-name (if your ArgoCD app name is different from the release name defined in the app).

After that, most of my ServiceMonitors stopped working, as the ServiceMonitor annotations no longer matched the service annotations. The solution was to stop using the app.kubernetes.io/instance annotation to mark the resources managed by that tool.

Because of the above, I recommend always using argocd.argoproj.io/instance instead of the default one if you have a release name set for your ArgoCD apps.

https://argo-cd.readthedocs.io/en/stable/faq/#why-is-my-app-out-of-sync-even-after-syncing
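A minimal sketch of that change, assuming ArgoCD runs in the argocd namespace: the tracking label can be switched via application.instanceLabelKey in the argocd-cm ConfigMap, for example:

kubectl -n argocd patch configmap argocd-cm --type merge -p '{"data":{"application.instanceLabelKey":"argocd.argoproj.io/instance"}}'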

Upvotes: 0

Matthias M

Reputation: 14940

You can analyze this using the prometheus web interface (a port-forward sketch to reach it follows after the steps):

(1) Check if the ServiceMonitor config appears in the prometheus config: http://localhost:9090/config. If you can't find your config, I would check whether it is valid and deployed to the cluster.

(2) Check if prometheus can discover pods via this config: http://localhost:9090/service-discovery

If the service discovery can't find your pods, I would compare all the values required by the config with the labels provided by your pods.

(3) If the service discovery has selected your services, check the targets page: http://localhost:9090/targets

Here you will see if the prometheus endpoints are healthy and accessible by prometheus.
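To reach those pages you usually need a port-forward to the Prometheus service first. A sketch, assuming the default service name that kube-prometheus-stack creates for a release called prometheus (adjust to your release):

kubectl -n monitoring port-forward svc/prometheus-kube-prometheus-prometheus 9090:9090

For step (2), a quick way to list the labels to compare is, for example:

kubectl -n monitoring get svc,pods --show-labels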

Upvotes: 0

schrom

Reputation: 1661

I found this guide very helpful.

Please keep in mind that, depending on the prometheus stack you are using, labels and names can have different default values (for me, using kube-prometheus-stack, the secret name was prometheus-kube-prometheus-stack-prometheus instead of prometheus-k8s, for example).

Essential quotes:

ServiceMonitor references

Has my ServiceMonitor been picked up by Prometheus?

ServiceMonitor objects and the namespace where they belong are selected by the serviceMonitorSelector and serviceMonitorNamespaceSelector of a Prometheus object. The name of a ServiceMonitor is encoded in the Prometheus configuration, so you can simply grep whether it is present there. The configuration generated by the Prometheus Operator is stored in a Kubernetes Secret, named after the Prometheus object name prefixed with prometheus-, and is located in the same namespace as the Prometheus object. For example, for a Prometheus object called k8s, one can find out whether the ServiceMonitor named my-service-monitor has been picked up with:

kubectl -n monitoring get secret prometheus-k8s -ojson | jq -r '.data["prometheus.yaml.gz"]' | base64 -d | gunzip | grep "my-service-monitor"
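Adapted to the release in the question (assuming the Prometheus object is called prometheus-kube-prometheus-prometheus, as the pod name in the question suggests), the same check would look roughly like:

kubectl -n monitoring get secret prometheus-prometheus-kube-prometheus-prometheus -ojson | jq -r '.data["prometheus.yaml.gz"]' | base64 -d | gunzip | grep "appservice-servicemonitor"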

Upvotes: 4
