My AlertmanagerConfig:
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: configlinkflowalertmanager
  labels:
    alertmanagerConfig: linkflowAlertmanager
spec:
  route:
    groupBy: ['alertname']
    groupWait: 30s
    groupInterval: 5m
    repeatInterval: 12h
    receiver: 'webhook'
    matchers:
    - name: alertname
      value: KubePodCrashLooping
    - name: namespace
      value: linkflow
  receivers:
  - name: 'webhook'
    webhookConfigs:
    - url: 'http://xxxxx:1194/'
The web UI shows that the namespace matcher becomes monitoring. Why? Only alerts from the monitoring namespace get sent out.
Can I send alerts from other namespaces, or from all namespaces? The generated Alertmanager route looks like this:
route:
  receiver: Default
  group_by:
  - namespace
  continue: false
  routes:
  - receiver: monitoring-configlinkflowalertmanager-webhook
    group_by:
    - namespace
    match:
      alertname: KubePodCrashLooping
      namespace: monitoring
    continue: true
    group_wait: 30s
    group_interval: 5m
    repeat_interval: 12h
This is a feature:
That's kind of the point of the feature; otherwise AlertmanagerConfig resources in different namespaces could conflict and Alertmanager would not be able to start.
There is an issue (#3737) to make the namespace label matching optional / configurable. The related PR still has to be merged (as of today), but it will allow you to define global alerts.
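In the meantime, a common workaround is to create the AlertmanagerConfig in the namespace whose alerts you want routed, since the operator injects a namespace matcher equal to the namespace the resource lives in. A minimal sketch reusing the names from the question, assuming the Alertmanager's alertmanagerConfigNamespaceSelector selects the linkflow namespace:
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: configlinkflowalertmanager
  namespace: linkflow            # the operator adds a namespace="linkflow" matcher automatically
  labels:
    alertmanagerConfig: linkflowAlertmanager
spec:
  route:
    groupBy: ['alertname']
    receiver: 'webhook'
    matchers:
    - name: alertname
      value: KubePodCrashLooping
  receivers:
  - name: 'webhook'
    webhookConfigs:
    - url: 'http://xxxxx:1194/'
This routes KubePodCrashLooping alerts from the linkflow namespace; routing alerts from all namespaces is not possible until the change tracked in #3737 lands.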
Related
I have the following YAML ConfigMap for Alertmanager, but it is not sending mail. I verified that the SMTP settings work in another script.
kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager-config
  namespace: monitoring
data:
  config.yml: |-
    global:
      smtp_smarthost: 'smtp.gmail.com:587'
      smtp_from: 'AlertManager#xxx.com'
      smtp_auth_username: 'alertmanager#gmail.com'
      smtp_auth_password: 'xxxxxxxx'
    templates:
    - '/etc/alertmanager/*.tmpl'
    route:
      receiver: alert-emailer
      group_by: ['alertname', 'priority']
      group_wait: 10s
      repeat_interval: 30m
      routes:
      - receiver: slack_demo
        # Send severity=slack alerts to slack.
        match:
          severity: slack
        group_wait: 10s
        repeat_interval: 1m
    receivers:
    - name: alert-emailer
      email_configs:
      - to: alertmanager#gmail.com
        send_resolved: false
        from: alertmanager#gmail.com
        smarthost: smtp.gmail.com:587
        require_tls: false
    - name: slack_demo
      slack_configs:
      - api_url: https://hooks.slack.com/services/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sz4Qk
        channel: '#xxxxxxxx'
Any idea why it is not working?
When I enable the Alertmanager, a secret gets created with the name alertmanager-{chartName}-alertmanager, but no Alertmanager pods or StatefulSet get created.
When I delete this secret with kubectl delete and upgrade the chart again, new secrets get created: alertmanager-{chartName}-alertmanager and alertmanager-{chartName}-alertmanager-generated. In this case I can see the Alertmanager pods and StatefulSet, but the -generated secret has default values which are null, while alertmanager-{chartName}-alertmanager has the updated configuration.
I checked the alertmanager.yml with amtool and it is reported as valid.
Chart - kube-prometheus-stack-36.2.0
# Configuration in my values.yaml
alertmanager:
  enabled: true
  global:
    resolve_timeout: 5m
    smtp_require_tls: false
  route:
    receiver: 'email'
  receivers:
  - name: 'null'
  - name: 'email'
    email_configs:
    - to: xyz#gmail.com
      from: abc#gmail.com
      smarthost: x.x.x.x:25
      send_resolved: true
# Configuration from the secret alertmanager-{chartName}-alertmanager
global:
  resolve_timeout: 5m
  smtp_require_tls: false
inhibit_rules:
- equal:
  - namespace
  - alertname
  source_matchers:
  - severity = critical
  target_matchers:
  - severity =~ warning|info
- equal:
  - namespace
  - alertname
  source_matchers:
  - severity = warning
  target_matchers:
  - severity = info
- equal:
  - namespace
  source_matchers:
  - alertname = InfoInhibitor
  target_matchers:
  - severity = info
receivers:
- name: "null"
- email_configs:
  - from: abc#gmail.com
    send_resolved: true
    smarthost: x.x.x.x:25
    to: xyz#gmail.com
  name: email
route:
  group_by:
  - namespace
  group_interval: 5m
  group_wait: 30s
  receiver: email
  repeat_interval: 12h
  routes:
  - matchers:
    - alertname =~ "InfoInhibitor|Watchdog"
    receiver: "null"
templates:
- /etc/alertmanager/config/*.tmpl
I have deployed an Event Hub triggered Azure Function written in Java on AKS. The function should scale out using KEDA.
The function is correctly triggered and working, but it is not scaling out when the load increases. I added sleep calls to the function implementation to make sure it is not burning through the events too fast and would be forced to scale out, but this did not change anything either.
kubectl get hpa shows the following output
NAME                   REFERENCE                TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
keda-hpa-eventlogger   Deployment/eventlogger   64/64 (avg)   1         20        1          3m41s
This seems to be a first indicator that something is not correct, as I assume the first number in the TARGETS column is the number of unprocessed events in the Event Hub. It stays the same no matter how many events I pump into the hub.
The Function was deployed using the following Kubernetes Deployment Manifest
data:
  AzureWebJobsStorage: <removed>
  FUNCTIONS_WORKER_RUNTIME: amF2YQ==
  EventHubConnectionString: <removed>
apiVersion: v1
kind: Secret
metadata:
  name: eventlogger
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eventlogger
  labels:
    app: eventlogger
spec:
  selector:
    matchLabels:
      app: eventlogger
  template:
    metadata:
      labels:
        app: eventlogger
    spec:
      containers:
      - name: eventlogger
        image: <removed>
        env:
        - name: AzureFunctionsJobHost__functions__0
          value: eventloggerHandler
        envFrom:
        - secretRef:
            name: eventlogger
        readinessProbe:
          failureThreshold: 3
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 240
          httpGet:
            path: /
            port: 80
            scheme: HTTP
        startupProbe:
          failureThreshold: 3
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 240
          httpGet:
            path: /
            port: 80
            scheme: HTTP
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: eventlogger
  labels:
    app: eventlogger
spec:
  scaleTargetRef:
    name: eventlogger
  pollingInterval: 5
  cooldownPeriod: 5
  minReplicaCount: 0
  maxReplicaCount: 20
  triggers:
  - type: azure-eventhub
    metadata:
      storageConnectionFromEnv: AzureWebJobsStorage
      connectionFromEnv: EventHubConnectionString
---
The Event Hub connection string contains the "EntityPath=" section as described in the KEDA Event Hub scaler documentation, and the credential has Manage permissions on the Event Hub namespace.
The output of kubectl describe ScaledObject is
Name:         eventlogger
Namespace:    default
Labels:       app=eventlogger
              scaledobject.keda.sh/name=eventlogger
Annotations:  <none>
API Version:  keda.sh/v1alpha1
Kind:         ScaledObject
Metadata:
  Creation Timestamp:  2022-04-17T10:30:36Z
  Finalizers:
    finalizer.keda.sh
  Generation:  1
  Managed Fields:
    API Version:  keda.sh/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
        f:labels:
          .:
          f:app:
      f:spec:
        .:
        f:cooldownPeriod:
        f:maxReplicaCount:
        f:minReplicaCount:
        f:pollingInterval:
        f:scaleTargetRef:
          .:
          f:name:
        f:triggers:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2022-04-17T10:30:36Z
    API Version:  keda.sh/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"finalizer.keda.sh":
        f:labels:
          f:scaledobject.keda.sh/name:
      f:status:
        .:
        f:conditions:
        f:externalMetricNames:
        f:lastActiveTime:
        f:originalReplicaCount:
        f:scaleTargetGVKR:
          .:
          f:group:
          f:kind:
          f:resource:
          f:version:
        f:scaleTargetKind:
    Manager:         keda
    Operation:       Update
    Time:            2022-04-17T10:30:37Z
  Resource Version:  1775052
  UID:               3b6a68c1-c3b9-4cdf-b5d5-41a9721ac661
Spec:
  Cooldown Period:    5
  Max Replica Count:  20
  Min Replica Count:  0
  Polling Interval:   5
  Scale Target Ref:
    Name:  eventlogger
  Triggers:
    Metadata:
      Connection From Env:          EventHubConnectionString
      Storage Connection From Env:  AzureWebJobsStorage
    Type:                           azure-eventhub
Status:
  Conditions:
    Message:  ScaledObject is defined correctly and is ready for scaling
    Reason:   ScaledObjectReady
    Status:   False
    Type:     Ready
    Message:  Scaling is performed because triggers are active
    Reason:   ScalerActive
    Status:   True
    Type:     Active
    Status:   Unknown
    Type:     Fallback
  External Metric Names:
    s0-azure-eventhub-$Default
  Last Active Time:        2022-04-17T10:30:47Z
  Original Replica Count:  1
  Scale Target GVKR:
    Group:             apps
    Kind:              Deployment
    Resource:          deployments
    Version:           v1
  Scale Target Kind:   apps/v1.Deployment
Events:
  Type    Reason              Age   From           Message
  ----    ------              ---   ----           -------
  Normal  KEDAScalersStarted  10s   keda-operator  Started scalers watch
  Normal  ScaledObjectReady   10s   keda-operator  ScaledObject is ready for scaling
So I'm a bit stuck, as I don't see any errors but it's still not behaving as expected.
Versions:
Kubernetes version: 1.21.9
KEDA Version: 2.6.1 installed using kubectl apply -f https://github.com/kedacore/keda/releases/download/v2.6.1/keda-2.6.1.yaml
Azure Functions using Java 11 and extensionBundle in host.json is configured using version [2.8.4, 3.0.0)
I was able to find a solution to the problem.
Event Hub triggered Azure Functions deployed on AKS show the same scaling characteristics as Azure Functions on App Service:
You only get one consumer per partition, to allow for ordering per partition.
This characteristic overrules maxReplicaCount in the Kubernetes deployment manifest.
So, to solve my own issue: by increasing the number of partitions on the Event Hub, I get one pod per partition and KEDA scales the workload as expected.
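Put differently, the partition count is the effective replica ceiling, so it makes sense to keep maxReplicaCount in line with it. A minimal sketch of the relevant ScaledObject fields, assuming the Event Hub was created with 20 partitions:
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: eventlogger
spec:
  scaleTargetRef:
    name: eventlogger
  minReplicaCount: 0
  maxReplicaCount: 20        # going above the Event Hub partition count gains nothing
  triggers:
  - type: azure-eventhub
    metadata:
      storageConnectionFromEnv: AzureWebJobsStorage
      connectionFromEnv: EventHubConnectionString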
I have a CronWorkflow that sends the following metric:
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: my-cron-wf
spec:
  schedule: "0 * * * *"
  suspend: false
  workflowSpec:
    metrics:
      prometheus:
      - name: wf_exec_duration_gauge
        help: "Duration gauge by workflow name and status"
        labels:
        - key: name
          value: my-cron-wf
        - key: status
          value: "{{workflow.status}}"
        gauge:
          value: "{{workflow.duration}}"
I would like to populate the metric's name label with the CronWorkflow name using a variable, in order to avoid copying it, but I didn't find a variable for it.
I tried {{workflow.name}}, but it equals the generated Workflow name, not the desired CronWorkflow name.
I use Kustomize to manage argo workflows resources so if there is a kustomize-way to achieve this it would be great as well.
Argo Workflows automatically adds the name of the Cron Workflow as a label on the workflow. That label is accessible as a variable.
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: my-cron-wf
spec:
  schedule: "0 * * * *"
  suspend: false
  workflowSpec:
    metrics:
      prometheus:
      - name: wf_exec_duration_gauge
        help: "Duration gauge by workflow name and status"
        labels:
        - key: name
          value: "{{workflow.labels.workflows.argoproj.io/cron-workflow}}"
        - key: status
          value: "{{workflow.status}}"
        gauge:
          value: "{{workflow.duration}}"
My current Kafka deployment file with 3 Kafka brokers looks like this:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: kafka
spec:
  selector:
    matchLabels:
      app: kafka
  serviceName: kafka-headless
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka-instance
        image: wurstmeister/kafka
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: "zookeeper-0.zookeeper-headless.default.svc.cluster.local:2181,\
                  zookeeper-1.zookeeper-headless.default.svc.cluster.local:2181,\
                  zookeeper-2.zookeeper-headless.default.svc.cluster.local:2181"
        - name: BROKER_ID_COMMAND
          value: "hostname | awk -F '-' '{print $2}'"
        - name: KAFKA_CREATE_TOPICS
          value: hello:2:1
        volumeMounts:
        - name: data
          mountPath: /var/lib/kafka/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 50Gi
This creates 3 Kafka brokers as a Stateful Set and connects to the Zookeeper cluster using the Kubedns service with FQDN (Fully Qualified Domain Names) such as:
zookeeper-0.zookeeper-headless.default.svc.cluster.local:2181
Broker IDs are generated based on the pod name:
- name: BROKER_ID_COMMAND
  value: "hostname | awk -F '-' '{print $2}'"
Result:
kafka-0 = 0
kafka-1 = 1
kafka-2 = 2
However, in order to use the Kubedns names for the Kafka brokers:
kafka-0.kafka-headless.default.svc.cluster.local:9092
kafka-1.kafka-headless.default.svc.cluster.local:9092
kafka-2.kafka-headless.default.svc.cluster.local:9092
I need to be able to set the KAFKA_ADVERTISED_HOST_NAME variable to the above FQDN values based on the name of the pod.
Currently I have the variable set to the name of the pod:
- name: KAFKA_ADVERTISED_HOST_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
Result:
KAFKA_ADVERTISED_HOST_NAME=kafka-0
KAFKA_ADVERTISED_HOST_NAME=kafka-1
KAFKA_ADVERTISED_HOST_NAME=kafka-2
But somehow I would need to append the rest of the DNS name.
Is there a way I could set the DNS value directly?
Something like this:
- name: KAFKA_ADVERTISED_HOST_NAME
  valueFrom:
    fieldRef:
      fieldPath: kubedns.name
I managed to solve the problem with a command field inside the pod definition:
command:
- sh
- -c
- "export KAFKA_ADVERTISED_HOST_NAME=$(hostname).kafka-headless.default.svc.cluster.local &&
   start-kafka.sh"
This runs a shell command which exports the advertised hostname environment variable based on the hostname value.
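For context, here is roughly how that override sits on the container in the StatefulSet from the question (a sketch; start-kafka.sh is the image's usual startup script and has to be invoked explicitly once command is overridden):
containers:
- name: kafka-instance
  image: wurstmeister/kafka
  command:
  - sh
  - -c
  - "export KAFKA_ADVERTISED_HOST_NAME=$(hostname).kafka-headless.default.svc.cluster.local &&
     start-kafka.sh"
  ports:
  - containerPort: 9092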
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: KAFKA_ZOOKEEPER_CONNECT
  value: zook-zookeeper.zook.svc.cluster.local:2181
- name: KAFKA_PORT_NUMBER
  value: "9092"
- name: KAFKA_LISTENERS
  value: SASL_SSL://:$(KAFKA_PORT_NUMBER)
- name: KAFKA_ADVERTISED_LISTENERS
  value: SASL_SSL://$(MY_POD_NAME).kafka-kafka-headless.kafka.svc.cluster.local:$(KAFKA_PORT_NUMBER)
The above config would create your FQDN.
You should be able to see those names in your Kafka logs when Kafka server starts.
NOTE: Kubernetes allows you to reference environment variables using the syntax $(VARIABLE)
None of the above worked for me; my setup is wurstmeister/kafka:2.12-2.5.0 and wurstmeister/zookeeper:3.4.6 in a single pod on Kubernetes 1.16 (don't ask), with a ClusterIP Service on top which forwards 9092 to the Kafka container.
This set of environment variables works:
- name: KAFKA_LISTENERS
  value: "INSIDE://:9094,OUTSIDE://:9092"
- name: KAFKA_ADVERTISED_LISTENERS
  value: "INSIDE://:9094,OUTSIDE://my-service.my-namespace.svc.cluster.local:9092"
- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
  value: "INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT" # not production-ready!
- name: KAFKA_INTER_BROKER_LISTENER_NAME
  value: INSIDE
- name: KAFKA_ZOOKEEPER_CONNECT
  value: "localhost:2181" # since it's in the same pod
Sources: wurstmeister/kafka doc, Kafka doc
The inherent problem seems to be that Kafka itself needs to be an IP-ish thing to bind to and to talk to itself via, while clients need a DNS-ish name to connect to from the outside. The latter one can't contain the pod name for some reason. (Might be a separate configuration issue on my end.)
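For completeness, the ClusterIP Service mentioned above would look roughly like this (a sketch; my-service, my-namespace and the app: kafka selector are placeholders matching the advertised listener above):
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
spec:
  type: ClusterIP
  selector:
    app: kafka            # label of the pod running Kafka and Zookeeper
  ports:
  - name: kafka
    port: 9092
    targetPort: 9092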