I am trying to control the severity level of PagerDuty alerts through the Alertmanager configuration.
I hard-coded the severity level to warning in the Alertmanager receiver:
- name: 'whatever_pd_service'
  pagerduty_configs:
    - send_resolved: true
      service_key: SERVICE_KEY
      url: https://events.pagerduty.com/v2/enqueue
      client: '{{ template "pagerduty.default.client" . }}'
      client_url: '{{ template "pagerduty.default.clientURL" . }}'
      severity: 'warning'
      description: '{{ (index .Alerts 0).Annotations.summary }}'
      details:
        firing: '{{ template "pagerduty.default.instances" .Alerts.Firing }}'
        information: '{{ range .Alerts }}{{ .Annotations.information }}
          {{ end }}'
        num_firing: '{{ .Alerts.Firing | len }}'
        num_resolved: '{{ .Alerts.Resolved | len }}'
        resolved: '{{ template "pagerduty.default.instances" .Alerts.Resolved }}'
but in the generated alerts, the severity level was still set to critical:
Is there a way to set the Severity level in PagerDuty?
I found out why the severity field in the Alertmanager receiver configuration is not working: we are using a Prometheus (Events API v1) integration in the PagerDuty service, and according to the PD Events API v1 specification (https://developer.pagerduty.com/docs/ZG9jOjExMDI5NTc4-send-a-v1-event), there is no severity field.
So there are two ways to solve this problem (i.e. to achieve dynamic notifications for PagerDuty): either use Events API v2, or use Service Orchestrations (https://support.pagerduty.com/docs/event-orchestration#service-orchestrations).
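With an Events API v2 integration, the receiver can take the severity dynamically from the alert labels. A minimal sketch, assuming the alerts carry a severity label (note that v2 integrations use routing_key instead of service_key):

```yaml
- name: 'whatever_pd_service'
  pagerduty_configs:
    - send_resolved: true
      routing_key: ROUTING_KEY  # Events API v2 integration key (replaces service_key)
      severity: '{{ .CommonLabels.severity }}'  # assumes a "severity" label on the alerts
```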
I have a role to set up a NATS cluster. I've used host_vars to define which node is the master node, like below:
is_master: true
Then, in the setup-nats.yml task file, I used the following to extract the master node's IP address based on the host_var I've set, and then used it as a variable for the Jinja2 template. However, the variable doesn't get passed down to the template and I get `variable 'master_ip' is undefined`.
- name: Set master IP
  set_fact:
    set_master_ip: "{{ ansible_facts['default_ipv4']['address'] }}"
  cacheable: yes
  when: is_master

- name: debug
  debug:
    msg: "{{ set_master_ip }}"
  run_once: true

- name: generate nats-server.conf for the slave nodes
  template:
    src: nats-server-slave.conf.j2
    dest: /etc/nats-server.conf
    owner: nats
    group: nats
    mode: 0644
  when:
    - is_master == false
  vars:
    master_ip: "{{ set_master_ip }}"
  notify: nats-server
The variable is used like below in the Jinja2 template:
routes = [
  nats-route://ruser:{{ nats_server_password }}@{{ master_ip }}:6222
]
}
Questions:
Is this approach according to the best practices?
What is the correct way of doing the above so the variable is passed down to the template?
Test Output:
I'm using Molecule to test my Ansible role, and even though the IP address is visible in the debug task, it doesn't get passed down to the template:
TASK [nats : Set master IP] ****************************************************
ok: [target1]
skipping: [target2]
skipping: [target3]
TASK [nats : debug] ************************************************************
ok: [target1] =>
msg: 10.0.2.15
TASK [nats : generate nats-server.conf for the slave nodes] ********************
skipping: [target1]
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleUndefinedVariable: {{ set_master_ip }}: 'set_master_ip' is undefined
fatal: [target2]: FAILED! => changed=false
msg: 'AnsibleUndefinedVariable: {{ set_master_ip }}: ''set_master_ip'' is undefined'
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleUndefinedVariable: {{ set_master_ip }}: 'set_master_ip' is undefined
fatal: [target3]: FAILED! => changed=false
msg: 'AnsibleUndefinedVariable: {{ set_master_ip }}: ''set_master_ip'' is undefined'
Any help is appreciated, thanks in advance.
UPDATE: I suspect the issue has something to do with the variable scope being in the host context, but I cannot find a way to fix it (I might be wrong, though).
This is far from best practice IMO, but to answer your direct question: your problem is not passing the variable to your template, but the fact that it is not assigned to all hosts in your play (and hence is undefined on any non-master node). The following (untested) addresses that issue while keeping the same task structure.
- name: Set master IP for all nodes
  ansible.builtin.set_fact:
    master_ip: "{{ hostvars | dict2items | map(attribute='value')
      | selectattr('is_master', 'defined') | selectattr('is_master')
      | map(attribute='ansible_facts.default_ipv4.address') | first }}"
  cacheable: yes
  run_once: true
- name: Show calculated master IP (making sure it is assigned everywhere)
  ansible.builtin.debug:
    msg: "{{ master_ip }}"

- name: generate nats-server.conf for the slave nodes
  ansible.builtin.template:
    src: nats-server-slave.conf.j2
    dest: /etc/nats-server.conf
    owner: nats
    group: nats
    mode: 0644
  when: not is_master | bool
  notify: nats-server
Ideas for enhancement (non-exhaustive):
Select your master based on a group membership in the inventory rather than on a host attribute. This makes gathering the IP easier (e.g. master_ip: "{{ hostvars[groups.master | first].ansible_facts.default_ipv4.address }}").
Set the IP as a play var, or directly inside the inventory for the node group, rather than in a set_fact task.
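As a sketch of the first enhancement, assuming a hypothetical inventory group named master containing the single master node (and facts gathered for it), the whole set_fact step collapses to one play var:

```yaml
- hosts: all
  vars:
    master_ip: "{{ hostvars[groups.master | first].ansible_facts.default_ipv4.address }}"
  tasks:
    - name: generate nats-server.conf for the slave nodes
      ansible.builtin.template:
        src: nats-server-slave.conf.j2
        dest: /etc/nats-server.conf
      when: inventory_hostname not in groups.master
```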
I was trying to parse an XML response to JSON using parse_xml on an AWS VPN output, but I'm getting the error below.
I am able to access the data with "vpn_conn_facts.vpn_connections[0].customer_gateway_configuration"; it is when I apply the parse_xml filter that the error arises.
fatal: [localhost]: FAILED! => {"msg": "Unexpected templating type error occurred on ({{ vpn_conn_facts.vpn_connections[0].customer_gateway_configuration | parse_xml('aws_vpn_parser.yaml') }}): 'NoneType' object is not subscriptable"}
The playbook:
- hosts: '{{ PALO_HOST | default("localhost") }}'
  connection: local
  gather_facts: true
  collections:
    - paloaltonetworks.panos
  tasks:
    - name: load var
      include_vars: provider.yaml

    - name: load aws var
      include_vars: /etc/ansible/aws/vpn_facts.yaml

    - name: load variable dir
      include_vars:
        dir: /etc/ansible/aws/vars/

    - name: aws_vpn connection info
      ec2_vpc_vpn_info:
        vpn_connection_ids: '{{ vpn_id }}'
        region: "{{ region }}"
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
      register: vpn_conn_facts

    - name: set_fact
      set_fact:
        parsed: "{{ vpn_conn_facts.vpn_connections[0].customer_gateway_configuration
          | parse_xml('aws_vpn_parser.yaml') }}"

    - debug:
        msg: '{{ parsed }}'
output of "vpn_conn_facts.vpn_connections[0].customer_gateway_configuration"
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<vpn_connection id=\"vpn-395e9883880\">\n <customer_gateway_id>cgw-136da58e954a</customer_gateway_id>\n <vpn_gateway_id>vgw-015c18a444e89</vpn_gateway_id>\n <vpn_connection_type>ipsec.1</vpn_connection_type>\n <vpn_connection_attributes>NoBGPVPNConnection</vpn_connection_attributes>\n <ipsec_tunnel>\n <customer_gateway>\n <tunnel_outside_address>\n <ip_address>131.226.22.251</ip_address>\n </tunnel_outside_address>\n <tunnel_inside_address>\n <ip_address>169.254.254.58</ip_address>\n <network_mask>255.255.255.252</network_mask>\n <network_cidr>30</network_cidr>\n </tunnel_inside_address>\n </customer_gateway>\n <vpn_gateway>\n <tunnel_outside_address>\n <ip_address>34.232.238.139</ip_address>\n </tunnel_outside_address>\n <tunnel_inside_address>\n <ip_address>169.254.254.57</ip_address>\n <network_mask>255.255.255.252</network_mask>\n <network_cidr>30</network_cidr>\n </tunnel_inside_address>\n </vpn_gateway>\n <ike>\n <authentication_protocol>sha1</authentication_protocol>\n <encryption_protocol>aes-128-cbc</encryption_protocol>\n <lifetime>28800</lifetime>\n <perfect_forward_secrecy>group2</perfect_forward_secrecy>\n <mode>main</mode>\n <pre_shared_key>JXzQgDDNG944e0nnh4w</pre_shared_key>\n </ike>\n <ipsec>\n <protocol>esp</protocol>\n <authentication_protocol>hmac-sha1-96</authentication_protocol>\n <encryption_protocol>aes-128-cbc</encryption_protocol>\n <lifetime>3600</lifetime>\n <perfect_forward_secrecy>group2</perfect_forward_secrecy>\n <mode>tunnel</mode>\n <clear_df_bit>true</clear_df_bit>\n <fragmentation_before_encryption>true</fragmentation_before_encryption>\n <tcp_mss_adjustment>1379</tcp_mss_adjustment>\n <dead_peer_detection>\n <interval>10</interval>\n <retries>3</retries>\n </dead_peer_detection>\n </ipsec>\n </ipsec_tunnel>\n <ipsec_tunnel>\n <customer_gateway>\n <tunnel_outside_address>\n <ip_address>131.226.22.231</ip_address>\n </tunnel_outside_address>\n <tunnel_inside_address>\n 
<ip_address>169.254.207.46</ip_address>\n <network_mask>255.255.255.252</network_mask>\n <network_cidr>30</network_cidr>\n </tunnel_inside_address>\n </customer_gateway>\n <vpn_gateway>\n <tunnel_outside_address>\n <ip_address>54.225.7.92</ip_address>\n </tunnel_outside_address>\n <tunnel_inside_address>\n <ip_address>169.254.207.45</ip_address>\n <network_mask>255.255.255.252</network_mask>\n <network_cidr>30</network_cidr>\n </tunnel_inside_address>\n </vpn_gateway>\n <ike>\n <authentication_protocol>sha1</authentication_protocol>\n <encryption_protocol>aes-128-cbc</encryption_protocol>\n <lifetime>28800</lifetime>\n <perfect_forward_secrecy>group2</perfect_forward_secrecy>\n <mode>main</mode>\n <pre_shared_key>RDt7vieaxRkjUwaCJ8M8Lo.Qztdhhfdq</pre_shared_key>\n </ike>\n <ipsec>\n <protocol>esp</protocol>\n <authentication_protocol>hmac-sha1-96</authentication_protocol>\n <encryption_protocol>aes-128-cbc</encryption_protocol>\n <lifetime>3600</lifetime>\n <perfect_forward_secrecy>group2</perfect_forward_secrecy>\n <mode>tunnel</mode>\n <clear_df_bit>true</clear_df_bit>\n <fragmentation_before_encryption>true</fragmentation_before_encryption>\n <tcp_mss_adjustment>1379</tcp_mss_adjustment>\n <dead_peer_detection>\n <interval>10</interval>\n <retries>3</retries>\n </dead_peer_detection>\n </ipsec>\n </ipsec_tunnel>\n</vpn_connection>"
I would like to add a server to an autoscaling group using an SSM document: if the group has n instances running, I want to have (n+1).
Since this stack is managed by CloudFormation, I just need to increase the 'DesiredCapacity' variable and update the stack, so I created a document with 2 steps:
get the current value of 'DesiredCapacity'
update the stack with the value of 'DesiredCapacity' + 1
I didn't find a way to express this simple operation; I guess I'm doing something wrong...
SSM Document:
schemaVersion: '0.3'
parameters:
  cfnStack:
    description: 'The cloudformation stack to be updated'
    type: String
mainSteps:
  - name: GetDesiredCount
    action: 'aws:executeAwsApi'
    inputs:
      Service: cloudformation
      Api: DescribeStacks
      StackName: '{{ cfnStack }}'
    outputs:
      - Selector: '$.Stacks[0].Outputs.DesiredCapacity'
        Type: String
        Name: DesiredCapacity
  - name: UpdateCloudFormationStack
    action: 'aws:executeAwsApi'
    inputs:
      Service: cloudformation
      Api: UpdateStack
      StackName: '{{ cfnStack }}'
      UsePreviousTemplate: true
      Parameters:
        - ParameterKey: WebServerCapacity
          ParameterValue: 'GetDesiredCount.DesiredCapacity' + 1 ### ERROR
          # ParameterValue: '{{GetDesiredCount.DesiredCapacity}}' + 1 ### ERROR (trying to concat STR to INT)
          # ParameterValue: '{{ GetDesiredCount.DesiredCapacity + 1}}' ### ERROR
There is a way to do calculations inside an SSM document using the Python runtime.
The additional Python step does the following:
The Python runtime receives its variables via the 'InputPayload' property
The 'current' (str) key is added to the event object
The Python function script_handler is called
The 'current' value is extracted using event['current']
The string is converted to an int and 1 is added
A dictionary with the 'desired_capacity' key is returned, with its value as a string
The output is exposed ($.Payload.desired_capacity refers to 'desired_capacity' in the returned dictionary)
schemaVersion: '0.3'
parameters:
  cfnStack:
    description: 'The cloudformation stack to be updated'
    type: String
mainSteps:
  - name: GetDesiredCount
    action: 'aws:executeAwsApi'
    inputs:
      Service: cloudformation
      Api: DescribeStacks
      StackName: '{{ cfnStack }}'
    outputs:
      - Selector: '$.Stacks[0].Outputs.DesiredCapacity'
        Type: String
        Name: DesiredCapacity
  - name: Calculate
    action: 'aws:executeScript'
    inputs:
      Runtime: python3.6
      Handler: script_handler
      Script: |-
        def script_handler(events, context):
            desired_capacity = int(events['current']) + 1
            return {'desired_capacity': str(desired_capacity)}
      InputPayload:
        current: '{{ GetDesiredCount.DesiredCapacity }}'
    outputs:
      - Selector: $.Payload.desired_capacity
        Type: String
        Name: NewDesiredCapacity
  - name: UpdateCloudFormationStack
    action: 'aws:executeAwsApi'
    inputs:
      Service: cloudformation
      Api: UpdateStack
      StackName: '{{ cfnStack }}'
      UsePreviousTemplate: true
      Parameters:
        - ParameterKey: WebServerCapacity
          ParameterValue: '{{ Calculate.NewDesiredCapacity }}'
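Outside of SSM, the handler from the aws:executeScript step can be exercised locally to confirm the arithmetic; the SSM runtime builds the events dict from InputPayload (a minimal sketch):

```python
# Same handler as in the Script block above; SSM passes InputPayload as `events`.
def script_handler(events, context):
    # 'current' arrives as a string because the previous step's output Type is String
    desired_capacity = int(events['current']) + 1
    return {'desired_capacity': str(desired_capacity)}

# Simulate the runtime call with InputPayload current="3"
print(script_handler({'current': '3'}, None))  # → {'desired_capacity': '4'}
```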
I have this playbook that runs fine, but not as I need it, and I cannot find the problem.
The playbook should allow me to change permissions either recursively on a filesystem or on a particular file.
- name: Playbook to change file and directory permissions
  hosts: '{{ target_hosts }}'
  vars:
    PATH: '{{ target_path }}'
    PERMISSIONS: '{{ number }}'
    OWNER: '{{ target_owner }}'
    GROUP: '{{ target_group }}'
  tasks:
    - name: Checking that it is not a system mount point
      fail:
        msg: "Changing permissions on system fs is not allowed"
      when: PATH in ["/etc", "/var", "/tmp", "/usr", "/", "/opt", "/home", "/boot"]

    - name: Checking if the path is a file or a filesystem
      stat:
        path: '{{ PATH }}'
      register: path_status

    - name: Applying permissions on the filesystem
      block:
        - name: Report if directory exists
          debug:
            msg: "Directory {{ PATH }} is present on the server"
          when: path_status.stat.exists
        - name: Applying permissions recursively
          file:
            path: '{{ PATH }}'
            mode: '0{{ PERMISSIONS }}'
            owner: '{{ OWNER }}'
            group: '{{ GROUP }}'
            recurse: yes
      when: path_status.stat.isdir is defined and path_status.stat.isdir

    - name: Applying permissions on the file
      block:
        - name: Report if file exists
          debug:
            msg: "File {{ PATH }} is present on the server"
          when: path_status.stat.exists
        - name: Applying permissions
          file:
            path: '{{ PATH }}'
            state: file
            mode: '0{{ PERMISSIONS }}'
            owner: '{{ OWNER }}'
            group: '{{ GROUP }}'
      when: path_status.stat.isreg is defined and path_status.stat.isreg
The first 2 tasks:
Verify that it is not a system filesystem
Using the Ansible stat module, register the path that is passed in via the PATH variable
When I execute it passing just a filesystem, like in the following example:
ansible-playbook change_fs_permissions.yml -e "target_hosts=centoslabs target_path=/etc number=755 target_owner=root target_group=testing"
the execution ends because it's a system mount point (which is what I expect).
But if I pass something like /tmp/somefile.txt in the PATH variable, my expectation is that the playbook should fail again, since it should not change anything within that filesystem; instead, it continues executing and changes the permissions.
You will see that I use a block, since that seemed the best approach: if a filesystem is passed, the playbook executes those tasks, and if it is a file, it executes the others.
Can you give me some ideas on how to approach this problem?
Simplify the conditions
when: path_status.stat.isdir is defined and path_status.stat.isdir
when: path_status.stat.isreg is defined and path_status.stat.isreg
Do not test whether the attribute is defined; set the default to false instead. This will also skip the situations where PATH is neither a directory nor a regular file.
when: path_status.stat.isdir|default(false)
when: path_status.stat.isreg|default(false)
In this case, you can also omit testing for the PATH's existence, because if it does not exist, the block will be skipped anyway
when: path_status.stat.exists
Try this
- name: Applying permissions on the filesystem
  block:
    - name: Report if directory exists
      debug:
        msg: "Directory {{ PATH }} is present on the server"
    - name: Applying permissions recursively
      file:
        path: '{{ PATH }}'
        mode: '0{{ PERMISSIONS }}'
        owner: '{{ OWNER }}'
        group: '{{ GROUP }}'
        recurse: true
  when: path_status.stat.isdir|default(false)

- name: Applying permissions on the file
  block:
    - name: Report if file exists
      debug:
        msg: "File {{ PATH }} is present on the server"
    - name: Applying permissions
      file:
        path: '{{ PATH }}'
        state: file
        mode: '0{{ PERMISSIONS }}'
        owner: '{{ OWNER }}'
        group: '{{ GROUP }}'
  when: path_status.stat.isreg|default(false)
I did it and it's working now.
What I did was change this part:
- name: Checking that it is not a system mount point
  fail:
    msg: "Changing permissions on system fs is not allowed"
  when: PATH in ["/etc", "/var", "/tmp", "/usr", "/", "/opt", "/home", "/boot"]
For:
- name: Checking that it is not a system mount point
  fail:
    msg: "Changing permissions on system fs is not allowed"
  when: PATH.split('/')[1] in ["etc", "var", "tmp", "usr", "", "opt", "home", "boot"]
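Note that split('/')[1] yields the first path component without a leading slash (and an empty string for "/" itself), so the comparison list must use bare directory names. A hypothetical Python re-implementation of the condition illustrates the behaviour:

```python
# Mirror of the Jinja2 condition PATH.split('/')[1] in [...];
# names are bare (no leading slash), "" matches the root path "/".
system_dirs = ["etc", "var", "tmp", "usr", "", "opt", "home", "boot"]

def is_system_path(path):
    # path is assumed to be absolute, e.g. "/tmp/somefile.txt"
    return path.split('/')[1] in system_dirs

print(is_system_path('/tmp/somefile.txt'))  # → True (blocked)
print(is_system_path('/data/app.log'))      # → False (allowed)
```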
I have an Ansible playbook which refers to SSH key data for adding the public key to the authorized_keys file when the VM is created; here is an extract.
vars:
  vm1:
    ssh_key_var: '{{ ssh_key_data }}'
tasks:
  - name: Create VM
    azure_rm_virtualmachine:
      resource_group: '{{ resource_group1.name }}'
      name: '{{ vm1.name }}'
      vm_size: '{{ vm1.size }}'
      admin_username: '{{ vm1.admin_username }}'
      ssh_password_enabled: false
      ssh_public_keys:
        - path: '/home/{{ vm1.admin_username }}/.ssh/authorized_keys'
          key_data: '{{ vm1.ssh_key_var }}'
      network_interfaces: '{{ network_interface1.name }}'
      image: '{{ vm1.image }}'
Normally this is pretty straightforward: I'd run it on my laptop and have the key locally, maybe referring to the key data as a file.
I tried running the playbook in a Jenkins pipeline, using a secret-text environment variable "AZURE_AUTHORIZED_KEY" for the public key, which I store in Jenkins credentials:
stage('Deploy server') {
    agent {
        docker { image 'my_ansible_container:latest' }
    }
    environment {
        AZURE_CLIENT_ID = credentials('AZURE_CLIENT_ID_ANSIBLE')
        AZURE_SECRET = credentials('AZURE_SECRET_ANSIBLE')
        AZURE_SUBSCRIPTION_ID = credentials('AZURE_SUBSCRIPTION_ID_ANSIBLE')
        AZURE_TENANT = credentials('AZURE_TENANT_ANSIBLE')
        AUTHORIZED_KEY = credentials('AZURE_AUTHORIZED_KEY')
    }
    steps {
        // deploy server
        sh "ansible-playbook playbook.yml --extravars \"ssh_key_data=${AUTHORIZED_KEY}\""
    }
}
When I add the public key as a var in the playbook it all works fine, but I don't want to store keys in the repo, even if they are public keys and it's a private repo.
When I import it as an env var, it does not seem to take the value and 'cascade' it into the vars as expected. Does anyone have a solution to this kind of problem, or is my approach wrong?
Thanks
There was nothing wrong, except typos and a few missing closing quotes. Here is the syntax for those who may be interested. Note that I am using this as an intermediate solution; I want to eventually move secrets out of Jenkins and into something like HashiCorp Vault.
I also renamed some of my env vars to be a bit more representative:
    }
    environment {
        AZURE_CLIENT_ID = credentials('AZURE_CLIENT_ID_ANSIBLE')
        AZURE_SECRET = credentials('AZURE_SECRET_ANSIBLE')
        AZURE_SUBSCRIPTION_ID = credentials('AZURE_SUBSCRIPTION_ID_ANSIBLE')
        AZURE_TENANT = credentials('AZURE_TENANT_ANSIBLE')
        AUTHORIZED_KEY = credentials('AZURE_AUTHORIZED_KEY')
        AUTHORIZED_PASSWORD = credentials('AZURE_AUTHORIZED_PASSWORD')
    }
    steps {
        // deploy a boot strap server
        sh "ansible-playbook playbook.yml \
            --extra-var 'admin_password_var=${AUTHORIZED_PASSWORD}' \
            --extra-var 'ssh_public_key_var=${AUTHORIZED_KEY}'"
    }
and an extract of the playbook:
vars:
  vm1:
    admin_password: '{{ admin_password_var }}'
    ssh_public_key: '{{ ssh_public_key_var }}'

- name: Create VM
  azure_rm_virtualmachine:
    resource_group: '{{ resource_group1.name }}'
    name: '{{ vm1.name }}'
    vm_size: '{{ vm1.size }}'
    admin_username: '{{ vm1.admin_username }}'
    admin_password: '{{ vm1.admin_password }}'
    ssh_password_enabled: false
    ssh_public_keys:
      - path: '/home/{{ vm1.admin_username }}/.ssh/authorized_keys'
        key_data: '{{ vm1.ssh_public_key }}'
    network_interfaces: '{{ network_interface1.name }}'
    image: '{{ vm1.image }}'
Note I have chosen variable names that may appear duplicative, but I prefer this approach given it allows me to trace back the source and not get confused about which one is being used.
There may be other approaches, but this is a working one, and simple as well, which strikes me as an attractive combination!