Reputation: 843
I am trying to create alerts in Prometheus on Kubernetes and send them to a Slack channel. For this I am using the prometheus-community Helm charts (which already include Alertmanager). Since I want to use my own alerts, I have also created a values.yml (shown below), strongly inspired by the one here. If I port-forward Prometheus I can see my alert there, going from inactive to pending to firing, but no message is sent to Slack. I am quite confident that my Alertmanager configuration is fine (I have tested it with some prebuilt alerts from another chart and they were sent to Slack). So my best guess is that I am adding the alert in the wrong way (in the serverFiles part), but I cannot figure out how to do it correctly. Also, the Alertmanager logs look pretty normal to me. Does anyone have an idea where my problem comes from?
---
serverFiles:
  alerting_rules.yml:
    groups:
      - name: example
        rules:
          - alert: HighRequestLatency
            expr: sum(rate(container_network_receive_bytes_total{namespace="kube-logging"}[5m])) > 20000
            for: 1m
            labels:
              severity: page
            annotations:
              summary: High request latency

alertmanager:
  persistentVolume:
    storageClass: default-hdd-retain
  ## Deploy alertmanager
  ##
  enabled: true
  ## Service account for Alertmanager to use.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
  ##
  serviceAccount:
    create: true
    name: ""
  ## Configure pod disruption budgets for Alertmanager
  ## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
  ## This configuration is immutable once created and will require the PDB to be deleted to be changed
  ## https://github.com/kubernetes/kubernetes/issues/45398
  ##
  podDisruptionBudget:
    enabled: false
    minAvailable: 1
    maxUnavailable: ""
  ## Alertmanager configuration directives
  ## ref: https://prometheus.io/docs/alerting/configuration/#configuration-file
  ##      https://prometheus.io/webtools/alerting/routing-tree-editor/
  ##
  config:
    global:
      resolve_timeout: 5m
      slack_api_url: "I changed this url for the stack overflow question"
    route:
      group_by: ['job']
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 12h
      # receiver: 'slack'
      routes:
        - match:
            alertname: DeadMansSwitch
          receiver: 'null'
        - match:
          receiver: 'slack'
          continue: true
    receivers:
      - name: 'null'
      - name: 'slack'
        slack_configs:
          - channel: 'alerts'
            send_resolved: false
            title: '[{{ .Status | toUpper }}{{ if eq .Status "firing" }}:{{ .Alerts.Firing | len }}{{ end }}] Monitoring Event Notification'
            text: >-
              {{ range .Alerts }}
              *Alert:* {{ .Annotations.summary }} - `{{ .Labels.severity }}`
              *Description:* {{ .Annotations.description }}
              *Graph:* <{{ .GeneratorURL }}|:chart_with_upwards_trend:> *Runbook:* <{{ .Annotations.runbook }}|:spiral_note_pad:>
              *Details:*
              {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
              {{ end }}
              {{ end }}
Upvotes: 4
Views: 5565
Reputation: 37
If you are using the kube-prometheus-stack Helm chart for Prometheus and Grafana on Kubernetes, you can configure the Alertmanager config section in the values.yaml like this:
config:
  global:
    resolve_timeout: 5m
  route:
    group_by: ['alertname']
    group_wait: 30s
    group_interval: 5m
    repeat_interval: 20m
    receiver: 'slack-k8s-admin'
    routes:
      - match:
          alertname: DeadMansSwitch
        receiver: 'null'
      - match:
        receiver: 'slack-k8s-admin'
        continue: true
  receivers:
    - name: 'null'
    - name: 'slack-k8s-admin'
      slack_configs:
        - api_url: 'Webhook URL of your slack'
          channel: '#channel-name'
Then you can apply the chart with:
  helm install kube-prom-stack prometheus-community/kube-prometheus-stack -f values.yaml --namespace monitoring --create-namespace
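For context, this config block nests under the chart's top-level alertmanager key in values.yaml. A minimal sketch of that layout (the receiver name comes from the answer above; the webhook URL is a placeholder):

alertmanager:
  config:
    global:
      resolve_timeout: 5m
    route:
      receiver: 'slack-k8s-admin'
    receivers:
      - name: 'slack-k8s-admin'
        slack_configs:
          - api_url: 'https://hooks.slack.com/services/XXX'  # placeholder Slack webhook URL
            channel: '#channel-name'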
Upvotes: 0
Reputation: 1
Helm also uses double curly braces, just like the Slack/Mattermost receiver configuration, so the Alertmanager template strings clash with Helm's own templating.
To fix this you can use the following scheme:
# HELM values:
value_mm:
  # title: '{{ range .Alerts }}{{ .Annotations.summary }}\n{{ end }}'
  title: '{{ template "telegram.default.message" . }}'
  text: '{{ template "slack.myorg.text" . }}'

# alertmanager-configmap.yaml (Victoria Metrics Alert)
- name: mattermost
  slack_configs:
    - send_resolved: true
      api_url: {{ .Values.mm.url }}
      channel: "#alerts-channel"
      title: {{ .Values.value_mm.title | squote }}
      text: {{ .Values.value_mm.text | squote }}
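A different option, if you would rather keep the template text directly in the ConfigMap, is to wrap it in a Go raw string so Helm prints the braces literally instead of trying to expand them. This is only a sketch of that idea, not taken from the answer above (the receiver and channel names just mirror the example):

# alertmanager-configmap.yaml -- the backtick raw string is emitted verbatim by Helm,
# so the {{ ... }} template reaches Alertmanager unchanged
- name: mattermost
  slack_configs:
    - channel: "#alerts-channel"
      title: {{ `'{{ range .Alerts }}{{ .Annotations.summary }} {{ end }}'` }}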
Upvotes: 0
Reputation: 843
So I have finally solved the problem. Apparently the kube-prometheus-stack and the prometheus Helm charts structure their values differently. Instead of alertmanager.config, I had to insert the configuration (everything starting from global) under alertmanagerFiles.alertmanager.yml.
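For reference, a trimmed-down sketch of what the working values.yml looks like with the prometheus chart, i.e. the same Slack configuration from the question moved under alertmanagerFiles.alertmanager.yml (the webhook URL is a placeholder and the Slack message templates are omitted for brevity):

alertmanagerFiles:
  alertmanager.yml:
    global:
      resolve_timeout: 5m
      slack_api_url: "https://hooks.slack.com/services/XXX"  # placeholder webhook URL
    route:
      group_by: ['job']
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 12h
      routes:
        - match:
            alertname: DeadMansSwitch
          receiver: 'null'
        - match:
          receiver: 'slack'
          continue: true
    receivers:
      - name: 'null'
      - name: 'slack'
        slack_configs:
          - channel: 'alerts'
            send_resolved: false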
Upvotes: 5