YoungWoo

Reputation: 15

I created a ServiceMonitor for json-exporter in a Prometheus environment, but the metrics cannot be found. Is there a way to check the metrics?

I am a beginner using Prometheus and Grafana to monitor values from a REST API. Prometheus, json-exporter, and Grafana were all installed via Helm charts: Prometheus with the default values.yaml, and json-exporter with a custom values.yaml. I confirmed that Prometheus registered the json-exporter's ServiceMonitor as a target, but I cannot see any of its metrics. How can I check the metrics? Below are the environment, screenshots, and code.

environment:

screenshots: https://drive.google.com/drive/folders/1vfjbidNpE2_yXfxdX8oX5eWh4-wAx7Ql?usp=sharing

custom_jsonexporter_values.yaml:

# Default values for prometheus-json-exporter.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: quay.io/prometheuscommunity/json-exporter
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext: {}
# fsGroup: 2000

# podLabels:
  # Custom labels for the pod

securityContext: {}
# capabilities:
#   drop:
#   - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000

service:
  type: ClusterIP
  port: 7979
  targetPort: http
  name: http

serviceMonitor:
  ## If true, a ServiceMonitor CRD is created for a prometheus operator
  ## https://github.com/coreos/prometheus-operator
  ##
  enabled: true
  namespace: monitoring
  scheme: http

  # Default values that will be used for all ServiceMonitors created by `targets`
  defaults:
    additionalMetricsRelabels: {}
    interval: 60s
    labels:
      release: prometheus
    scrapeTimeout: 60s

  targets:
    - name: pi2
      url: http://xxx.xxx.xxx.xxx:xxxx
      labels: {}                            # Map of labels for ServiceMonitor. Overrides value set in `defaults`
      interval: 60s                         # Scraping interval. Overrides value set in `defaults`
      scrapeTimeout: 60s                    # Scrape timeout. Overrides value set in `defaults`
      additionalMetricsRelabels: {}         # Map of metric labels and values to add
      
ingress:
  enabled: false
  className: ""
  annotations: {}
  # kubernetes.io/ingress.class: nginx
  # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
#   cpu: 100m
#   memory: 128Mi
# requests:
#   cpu: 100m
#   memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}
configuration:
  config: |
    ---
    modules:
      default:
        metrics:
          - name: used_storage_byte
            path: '{ .used }'
            help: used storage byte
            values:
              used: '{ .used }'
            labels: {}
          - name: free_storage_byte
            path: '{ .free }'
            help: free storage byte
            labels: {}
            values:
              free: '{ .free }'
          - name: total_storage_byte
            path: '{ .total }'
            help: total storage byte
            labels: {}
            values:
              total: '{ .total }'
              
        
prometheusRule:
  enabled: false
  additionalLabels: {}
  namespace: ""
  rules: []

additionalVolumes: []
  # - name: password-file
  #   secret:
  #     secretName: secret-name

additionalVolumeMounts: []
  # - name: password-file
  #   mountPath: "/tmp/mysecret.txt"
  #   subPath: mysecret.txt



Upvotes: 1

Views: 824

Answers (1)

Rick Rackow

Reputation: 1833

First, check the Targets page in the Prometheus UI to see a) whether your desired target is defined at all and b) whether the endpoint is reachable and being scraped.
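If the Prometheus UI is not exposed in your cluster, you can usually reach it by port-forwarding, e.g. kubectl port-forward svc/prometheus-operated 9090:9090 -n monitoring (the service name may differ in your setup), and then opening http://localhost:9090/targets in a browser.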

However, you may need to troubleshoot a little if either of the above is not the case:

It is important to understand what is happening. You have deployed a Prometheus Operator to the cluster. If you used the default values from the Helm chart, you also deployed a Prometheus custom resource (CR). This instance is what tells the Prometheus Operator how to ultimately configure the Prometheus running inside the pod. Certain things are static, like global metric relabeling for example, but most are dynamic, such as picking up new targets to actually scrape.

Inside the Prometheus CR you will find options to specify serviceMonitorSelector and serviceMonitorNamespaceSelector (the behaviour is the same for probes and PodMonitors, so I'm just going over it once). Assuming you leave the default serviceMonitorNamespaceSelector: {}, the Prometheus Operator will look for ServiceMonitors in all namespaces on the cluster to which it has access via its serviceAccount. The serviceMonitorSelector field lets you specify a label and value combination that must be present on a ServiceMonitor for it to be picked up. Once one or more ServiceMonitors are found that match the criteria in the selectors, the Prometheus Operator adjusts the configuration in the actual Prometheus instance (that's the tl;dr version), so you end up with proper scrape targets.
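For reference, the relevant part of a Prometheus CR looks roughly like this (a sketch; the metadata names and the label value are assumptions, so compare against what kubectl get prometheus -n monitoring -o yaml shows in your cluster):

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: monitoring
spec:
  # Only ServiceMonitors carrying this label get picked up
  serviceMonitorSelector:
    matchLabels:
      release: prometheus-operator
  # An empty selector means: look in every namespace the serviceAccount can access
  serviceMonitorNamespaceSelector: {}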

Step 1 for troubleshooting: do your selectors match the labels and namespace of the ServiceMonitor? Actually check those. The default on the prometheus-operator Helm chart expects the label release: prometheus-operator, and in your config the json-exporter's ServiceMonitor only carries release: prometheus, which does not match.
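If your selector does indeed expect release: prometheus-operator (again, verify against your Prometheus CR), the fix in custom_jsonexporter_values.yaml would be to change the label under serviceMonitor.defaults, roughly like this:

serviceMonitor:
  enabled: true
  namespace: monitoring
  defaults:
    labels:
      # must match the Prometheus CR's serviceMonitorSelector
      release: prometheus-operator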

Step 2: The same behaviour as outlined for how ServiceMonitors are picked up happens in turn inside the ServiceMonitor itself, so make sure that your Service actually matches what is specified in the ServiceMonitor.
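As a sketch of what has to line up (the selector labels below are assumptions; check which labels the Helm chart actually puts on your json-exporter Service): the ServiceMonitor's selector must match the Service's labels, and the endpoint port must match the Service's port name, which in your values is http:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: json-exporter
  namespace: monitoring
  labels:
    release: prometheus-operator        # what the Prometheus CR selects on
spec:
  selector:
    matchLabels:
      # must match the labels on the json-exporter Service
      app.kubernetes.io/name: prometheus-json-exporter
  endpoints:
    - port: http                        # the Service port *name*, not the number
      interval: 60s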

To dive deeper into the available options and what the fields do, check the API documentation.

Upvotes: 1
