I am trying to configure the "route" of Alertmanager; below is my configuration:
route:
  group_by: ['instance']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 7m
  receiver: pager
  routes:
  - match:
      severity: critical
    receiver: email
  - match_re:
      severity: ^(warning|critical)$
    receiver: support_team

receivers:
- name: 'email'
  email_configs:
  - to: 'xxxxxx@xx.com'
- name: 'support_team'
  email_configs:
  - to: 'xxxxxx@xx.com'
- name: 'pager'
  email_configs:
  - to: 'alert-pager@example.com'
Right now the e-mail is only sent to the default receiver "pager"; it is not routed any further to the custom receivers.
You need to add this line to each route when you want alerts to keep being routed to the other ones:
continue: true
e.g.
route:
  group_by: ['instance']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 7m
  receiver: pager
  routes:
  - match:
      severity: critical
    receiver: email
    continue: true
  - match_re:
      severity: ^(warning|critical)$
    receiver: support_team
    continue: true

receivers:
- name: 'email'
  email_configs:
  - to: 'xxxxxx@xx.com'
- name: 'support_team'
  email_configs:
  - to: 'xxxxxx@xx.com'
- name: 'pager'
  email_configs:
  - to: 'alert-pager@example.com'
Btw, IMHO receiver should be at the same level as match in the YAML structure.
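To double-check the routing after adding continue: true, you can ask amtool which receivers a given label set resolves to. A small sketch, assuming the config is saved as alertmanager.yml in the current directory:

# A critical alert should now list both 'email' and 'support_team'
amtool config routes test --config.file=alertmanager.yml severity=critical

# A warning-only alert should still resolve to 'support_team'
amtool config routes test --config.file=alertmanager.yml severity=warning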
Related
I have the following YAML for an Alertmanager ConfigMap, but it is not sending mail. I verified that the SMTP settings work from another script.
kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager-config
  namespace: monitoring
data:
  config.yml: |-
    global:
      smtp_smarthost: 'smtp.gmail.com:587'
      smtp_from: 'AlertManager@xxx.com'
      smtp_auth_username: 'alertmanager@gmail.com'
      smtp_auth_password: 'xxxxxxxx'
    templates:
    - '/etc/alertmanager/*.tmpl'
    route:
      receiver: alert-emailer
      group_by: ['alertname', 'priority']
      group_wait: 10s
      repeat_interval: 30m
      routes:
      - receiver: slack_demo
        # Send severity=slack alerts to slack.
        match:
          severity: slack
        group_wait: 10s
        repeat_interval: 1m
    receivers:
    - name: alert-emailer
      email_configs:
      - to: alertmanager@gmail.com
        send_resolved: false
        from: alertmanager@gmail.com
        smarthost: smtp.gmail.com:587
        require_tls: false
    - name: slack_demo
      slack_configs:
      - api_url: https://hooks.slack.com/services/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sz4Qk
        channel: '#xxxxxxxx'
Any idea why it is not working?
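For reference, one quick way to rule out syntax problems is to pull the rendered config back out of the cluster and validate it locally with amtool. A sketch, reusing the ConfigMap name and the config.yml key from the manifest above:

# Dump the config.yml key from the ConfigMap and check it with amtool
kubectl -n monitoring get configmap alertmanager-config -o jsonpath='{.data.config\.yml}' > /tmp/config.yml
amtool check-config /tmp/config.yml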
When I enable Alertmanager, a secret gets created with the name alertmanager-{chartName}-alertmanager, but no Alertmanager pods or StatefulSet get created.
When I delete this secret with kubectl delete and upgrade the chart again, two new secrets get created: alertmanager-{chartName}-alertmanager and alertmanager-{chartName}-alertmanager-generated. In this case I can see the Alertmanager pods and StatefulSet, but the -generated secret only contains the default values, which are null. The secret alertmanager-{chartName}-alertmanager has the updated configuration.
I checked the alertmanager.yml with amtool and it shows as valid.
Chart - kube-prometheus-stack-36.2.0
# Configuration in my values.yaml
alertmanager:
  enabled: true
  global:
    resolve_timeout: 5m
    smtp_require_tls: false
  route:
    receiver: 'email'
  receivers:
  - name: 'null'
  - name: 'email'
    email_configs:
    - to: xyz@gmail.com
      from: abc@gmail.com
      smarthost: x.x.x.x:25
      send_resolved: true
# Configuration from the secret alertmanager-{chartName}-alertmanager
global:
  resolve_timeout: 5m
  smtp_require_tls: false
inhibit_rules:
- equal:
  - namespace
  - alertname
  source_matchers:
  - severity = critical
  target_matchers:
  - severity =~ warning|info
- equal:
  - namespace
  - alertname
  source_matchers:
  - severity = warning
  target_matchers:
  - severity = info
- equal:
  - namespace
  source_matchers:
  - alertname = InfoInhibitor
  target_matchers:
  - severity = info
receivers:
- name: "null"
- email_configs:
  - from: abc@gmail.com
    send_resolved: true
    smarthost: x.x.x.x:25
    to: xyz@gmail.com
  name: email
route:
  group_by:
  - namespace
  group_interval: 5m
  group_wait: 30s
  receiver: email
  repeat_interval: 12h
  routes:
  - matchers:
    - alertname =~ "InfoInhibitor|Watchdog"
    receiver: "null"
templates:
- /etc/alertmanager/config/*.tmpl
My AlertmanagerConfig:
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: configlinkflowalertmanager
  labels:
    alertmanagerConfig: linkflowAlertmanager
spec:
  route:
    groupBy: ['alertname']
    groupWait: 30s
    groupInterval: 5m
    repeatInterval: 12h
    receiver: 'webhook'
    matchers:
    - name: alertname
      value: KubePodCrashLooping
    - name: namespace
      value: linkflow
  receivers:
  - name: 'webhook'
    webhookConfigs:
    - url: 'http://xxxxx:1194/'
The web UI shows that the namespace matcher became monitoring. Why? Only alerts from the monitoring namespace get sent out.
Can I send alerts from another namespace, or from all namespaces?
route:
  receiver: Default
  group_by:
  - namespace
  continue: false
  routes:
  - receiver: monitoring-configlinkflowalertmanager-webhook
    group_by:
    - namespace
    match:
      alertname: KubePodCrashLooping
      namespace: monitoring
    continue: true
    group_wait: 30s
    group_interval: 5m
    repeat_interval: 12h
This is a feature:
That's kind of the point of the feature, otherwise it's possible that alertmanager configs in different namespaces conflict and Alertmanager won't be able to start.
There is an issue (#3737) to make namespace label matching optional / configurable. The related PR still has to be merged (as of today), but it will allow you to define global alerts.
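Until that lands, the injected namespace matcher simply follows the namespace the AlertmanagerConfig object is created in. So a workaround sketch, assuming per-namespace configs are acceptable and that the Alertmanager resource's alertmanagerConfigNamespaceSelector selects the linkflow namespace, is to create the same resource there and drop the explicit namespace matcher (the operator injects it anyway):

apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: configlinkflowalertmanager
  namespace: linkflow        # the injected namespace matcher follows this namespace
  labels:
    alertmanagerConfig: linkflowAlertmanager
spec:
  route:
    groupBy: ['alertname']
    groupWait: 30s
    groupInterval: 5m
    repeatInterval: 12h
    receiver: 'webhook'
    matchers:
    - name: alertname
      value: KubePodCrashLooping
  receivers:
  - name: 'webhook'
    webhookConfigs:
    - url: 'http://xxxxx:1194/'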
I have this alert configuration and expect the following behavior.
If destination: bloom and severity: info, send to slack-alert-info. This works.
If destination: bloom and severity: warning|critical, send to slack-alert-multi. This is where the error is: it only partly works.
severity: warning is sent as expected to both Slack channels, but critical is sent only to the default channel.
Can someone help me understand my error, please?
Amtool gives no error.
amtool config routes test --config.file=/opt/prometheus/etc/alertmanager.yml --tree --verify.receivers=slack-alert-multi severity=warning destination=bloom
Matching routes:
  .
  └── default-route
      └── {destination=~"^(?:bloom)$",severity=~"^(?:warning|critical)$"}  receiver: slack-alert-multi

slack-alert-multi

amtool config routes test --config.file=/opt/prometheus/etc/alertmanager.yml --tree --verify.receivers=slack-alert-multi severity=critical destination=bloom
Matching routes:
  .
  └── default-route
      └── {destination=~"^(?:bloom)$",severity=~"^(?:warning|critical)$"}  receiver: slack-alert-multi

slack-alert-multi
Alert configuration
...
labels:
  alerttype: infrastructure
  severity: warning
  destination: bloom
...
---
global:
  resolve_timeout: 30m
route:
  group_by: [ 'alertname', 'cluster', 'severity' ]
  group_wait: 30s
  group_interval: 30s
  repeat_interval: 300s
  receiver: 'slack'
  routes:
  - receiver: 'slack-alert-multi'
    match_re:
      destination: bloom
      severity: warning|critical
  - receiver: 'slack-alert-info'
    match_re:
      destination: bloom
      severity: info
receivers:
- name: 'slack-alert-multi'
  slack_configs:
  - api_url: 'https://hooks.slack.com/services/T0/B0/V2'
    channel: '#alert-upload'
    send_resolved: true
    icon_url: 'https://avatars3.githubusercontent.com/u/3380462'
    title: '{{ template "custom_title" . }}'
    text: '{{ template "custom_slack_message" . }}'
  - api_url: 'https://hooks.slack.com/services/T0/B0/J1'
    channel: '#alert-exports'
    send_resolved: true
    icon_url: 'https://avatars3.githubusercontent.com/u/3380462'
    title: '{{ template "custom_title" . }}'
    text: '{{ template "custom_slack_message" . }}'
# Default receiver
- name: 'slack'
  slack_configs:
  - api_url: 'https://hooks.slack.com/services/T0/B0/2x'
    channel: '#aws-notification'
    send_resolved: true
    icon_url: 'https://avatars3.githubusercontent.com/u/3380462'
    title: '{{ template "custom_title" . }}'
    text: '{{ template "custom_slack_message" . }}'
- name: 'slack-alert-info'
  slack_configs:
  - api_url: 'https://hooks.slack.com/services/T0/B0/EA'
    channel: '#alert-info'
    send_resolved: true
    icon_url: 'https://avatars3.githubusercontent.com/u/3380462'
    title: '{{ template "custom_title" . }}'
    text: '{{ template "custom_slack_message" . }}'
templates:
- '/opt/alertmanager_notifications.tmpl'
Try adding
  continue: true
to this route:
- receiver: 'slack-alert-info'
  match_re:
    destination: bloom
    severity: info
  continue: true
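For context, the routes section of the configuration above would then look like this (only continue: true added, everything else unchanged):

routes:
- receiver: 'slack-alert-multi'
  match_re:
    destination: bloom
    severity: warning|critical
- receiver: 'slack-alert-info'
  match_re:
    destination: bloom
    severity: info
  continue: true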
I am trying to use e-mail to receive alerts from Prometheus with Alertmanager; however, it keeps printing log lines like "Error on notify: EOF" source="notify.go:283" and "Notify for 3 alerts failed: EOF" source="dispatch.go:261". My Alertmanager config is below:
global:
  smtp_smarthost: 'smtp.xxx.com:xxx'
  smtp_from: 'xxxxx@xxx.com'
  smtp_auth_username: 'xxxx@xxx.com'
  smtp_auth_password: 'xxxxxxx'
  smtp_require_tls: false

route:
  group_by: ['instance']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 7m
  receiver: email
  routes:
  - match:
      severity: critical
    receiver: email
  - match_re:
      severity: ^(warning|critical)$
    receiver: support_team

receivers:
- name: 'email'
  email_configs:
  - to: 'xxxxxx@xx.com'
- name: 'support_team'
  email_configs:
  - to: 'xxxxxx@xx.com'
- name: 'pager'
  email_configs:
  - to: 'alert-pager@example.com'
Any suggestions?
Using smtp.xxx.com:587 fixed the issue, but I also needed to set smtp_require_tls: true.
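So the global block ends up looking roughly like this (host, addresses and credentials are placeholders from the config above):

global:
  smtp_smarthost: 'smtp.xxx.com:587'   # port 587 uses STARTTLS
  smtp_from: 'xxxxx@xxx.com'
  smtp_auth_username: 'xxxx@xxx.com'
  smtp_auth_password: 'xxxxxxx'
  smtp_require_tls: true               # required when the server expects STARTTLS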