everspader

Reputation: 1710

How to overwrite alertmanager configuration in kube-prometheus-stack helm chart

I am deploying a monitoring stack from the kube-prometheus-stack helm chart, and I am trying to configure alertmanager with my custom configuration for alerting to a Slack channel.

The configuration in the pod is loaded from /etc/alertmanager/config/alertmanager.yaml. From the pod description, this file is mounted from an automatically generated secret:

...
  volumeMounts:
   - mountPath: /etc/alertmanager/config
     name: config-volume
...
volumes:
  - name: config-volume
    secret:
      defaultMode: 420
      secretName: alertmanager-prometheus-community-kube-alertmanager-generated

If I inspect the secret, it contains the default configuration from the chart's default values under alertmanager.config, which is what I intend to overwrite.
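For reference, the secret can be decoded like this (the monitoring namespace is an assumption here; newer operator versions may store the data gzipped under an alertmanager.yaml.gz key instead):

# Decode the generated secret to view the configuration alertmanager runs with.
# Namespace and data key are assumptions; adjust them to your installation.
kubectl get secret alertmanager-prometheus-community-kube-alertmanager-generated \
  -n monitoring -o jsonpath='{.data.alertmanager\.yaml}' | base64 -d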

If I pass the following alertmanager configuration to a fresh installation of the chart, the alertmanager pod is not created:

alertmanager:
  config:
    global:
      resolve_timeout: 5m
    route:
      group_by: ['job', 'alertname', 'priority']
      group_wait: 10s
      group_interval: 1m
      routes:
      - match:
          alertname: Watchdog
        receiver: 'null'
      - receiver: 'slack-notifications'
        continue: true
    receivers:
    - name: 'slack-notifications'
      slack_configs:
      - slack_api_url: <url here>
        title: '{{ .Status }} ({{ .Alerts.Firing | len }}): {{ .GroupLabels.SortedPairs.Values | join " " }}'
        text: '<!channel> {{ .CommonAnnotations.summary }}'
        channel: '#mychannel'
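For completeness, I apply the values with a standard helm upgrade; the release name below is an assumption inferred from the generated secret name shown earlier:

# Install/upgrade the chart with the custom values.
# The release name prometheus-community is an assumption.
helm upgrade --install prometheus-community prometheus-community/kube-prometheus-stack \
  -f values.yaml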

On the other hand, if I don't pass any configuration in values.yaml, the alertmanager pod is created successfully.

How can I properly overwrite alertmanager's configuration so that it mounts the correct file with my custom configuration into /etc/alertmanager/config/alertmanager.yaml?

Upvotes: 9

Views: 13906

Answers (2)

m-eriksen

Reputation: 81

Alertmanager requires certain non-default entries for the override to take effect, and it appears to fail silently otherwise: a wrong configuration simply results in the pod not applying it (https://github.com/prometheus-community/helm-charts/issues/1998). What worked for me was to configure alertmanager carefully and to add a Watchdog child route together with the 'null' receiver:

route:
  group_by: [ '...' ]
  group_wait: 30s
  group_interval: 10s
  repeat_interval: 10s
  receiver: 'user1'
  routes:
    - match:
        alertname: Watchdog
      receiver: 'null'
receivers:
  - name: 'null'
  - ...
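
Since the failure is silent on the cluster side, it can also help to lint the configuration locally with amtool before templating it into the chart values; a minimal check, assuming the config is saved as alertmanager.yaml:

# Validate the Alertmanager configuration file locally.
# amtool ships with Alertmanager; the file name here is just an example.
amtool check-config alertmanager.yaml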
 

Upvotes: 8

chatla harishwar

Reputation: 51

Maybe the following steps will solve your problem:

1) Create a ConfigMap from the custom alertmanager.yaml file:

kubectl create configmap <name_of_the_configmap> --from-file=<path_and_name_of_the_file>

2) Mount the ConfigMap as a volume in the container:

...
  volumeMounts:
    - mountPath: /etc/alertmanager/config
      name: config-volume
...
volumes:
  - name: config-volume
    configMap:
      # Provide the name of the ConfigMap containing the files you want
      # to add to the container
      name: <ConfigMap_Name_Created>

3) Mounting the ConfigMap will override the file inside the container.
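To confirm the override took effect, you can read the file back from inside the container (namespace and pod name are placeholders):

# Print the mounted configuration from the running container;
# substitute your actual namespace and pod name.
kubectl exec -n <namespace> <alertmanager_pod_name> -- \
  cat /etc/alertmanager/config/alertmanager.yaml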

Upvotes: 1
