Jenkins pipeline - referring to SSH keys in Ansible and Terraform - Azure

I have an Ansible playbook which refers to SSH key data, adding the public key to the authorized_keys file when the VM is created. Here is an extract:
vars:
  vm1:
    ssh_key_var: '{{ ssh_key_data }}'
tasks:
  - name: Create VM
    azure_rm_virtualmachine:
      resource_group: '{{ resource_group1.name }}'
      name: '{{ vm1.name }}'
      vm_size: '{{ vm1.size }}'
      admin_username: '{{ vm1.admin_username }}'
      ssh_password_enabled: false
      ssh_public_keys:
        - path: '/home/{{ vm1.admin_username }}/.ssh/authorized_keys'
          key_data: '{{ vm1.ssh_key_var }}'
      network_interfaces: '{{ network_interface1.name }}'
      image: '{{ vm1.image }}'
Normally this is pretty straightforward - I'd run it on my laptop with the key stored locally, perhaps referring to the key data as a file.
I tried running the playbook in a Jenkins pipeline, passing the public key as the secret-text credential "AZURE_AUTHORIZED_KEY" via an environment variable:
stage('Deploy server') {
    agent {
        docker { image 'my_ansible_container:latest' }
    }
    environment {
        AZURE_CLIENT_ID = credentials('AZURE_CLIENT_ID_ANSIBLE')
        AZURE_SECRET = credentials('AZURE_SECRET_ANSIBLE')
        AZURE_SUBSCRIPTION_ID = credentials('AZURE_SUBSCRIPTION_ID_ANSIBLE')
        AZURE_TENANT = credentials('AZURE_TENANT_ANSIBLE')
        AUTHORIZED_KEY = credentials('AZURE_AUTHORIZED_KEY')
    }
    steps {
        // deploy server
        sh "ansible-playbook playbook.yml --extravars \"ssh_key_data=${AUTHORIZED_KEY}\""
    }
}
When I add the public key as a var in the playbook it all works fine, but I don't want to store keys in the repo, even public keys in a private repo.
When I import it as an environment variable it does not seem to take the value and 'cascade' it into the vars as expected. Does anyone have a solution to this kind of problem - is my approach wrong?
Thanks

There was nothing wrong, except for typos and a few misplaced closing quotes. Here is the working syntax for those who may be interested. Note I am using this as an intermediate solution; I want to move secrets out of Jenkins and into something like HashiCorp Vault.
I also renamed some of my env vars to be a bit more representative:
environment {
    AZURE_CLIENT_ID = credentials('AZURE_CLIENT_ID_ANSIBLE')
    AZURE_SECRET = credentials('AZURE_SECRET_ANSIBLE')
    AZURE_SUBSCRIPTION_ID = credentials('AZURE_SUBSCRIPTION_ID_ANSIBLE')
    AZURE_TENANT = credentials('AZURE_TENANT_ANSIBLE')
    AUTHORIZED_KEY = credentials('AZURE_AUTHORIZED_KEY')
    AUTHORIZED_PASSWORD = credentials('AZURE_AUTHORIZED_PASSWORD')
}
steps {
    // deploy a bootstrap server
    sh "ansible-playbook playbook.yml \
        --extra-var 'admin_password_var=${AUTHORIZED_PASSWORD}' \
        --extra-var 'ssh_public_key_var=${AUTHORIZED_KEY}'"
}
and an extract of the playbook
vars:
  vm1:
    admin_password: '{{ admin_password_var }}'
    ssh_public_key: '{{ ssh_public_key_var }}'

- name: Create VM
  azure_rm_virtualmachine:
    resource_group: '{{ resource_group1.name }}'
    name: '{{ vm1.name }}'
    vm_size: '{{ vm1.size }}'
    admin_username: '{{ vm1.admin_username }}'
    admin_password: '{{ vm1.admin_password }}'
    ssh_password_enabled: false
    ssh_public_keys:
      - path: '/home/{{ vm1.admin_username }}/.ssh/authorized_keys'
        key_data: '{{ vm1.ssh_public_key }}'
    network_interfaces: '{{ network_interface1.name }}'
    image: '{{ vm1.image }}'
Note I have chosen variable names that may appear duplicative, but I prefer this approach since it lets me trace a value back to its source without getting confused about which one is in use.
There may be other approaches, but this is a working one, and simple as well, which strikes me as an attractive combination!
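As a variation, if you would rather not pass the secret on the ansible-playbook command line at all (where it can show up in process listings), the playbook can read the exported credential directly with Ansible's env lookup. A sketch, assuming the environment block above has exported AUTHORIZED_KEY into the container's environment:

vars:
  vm1:
    ssh_public_key: "{{ lookup('env', 'AUTHORIZED_KEY') }}"

This removes the --extra-var flag from the sh step entirely, at the cost of coupling the playbook to a specific environment variable name.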

Related

XML to JSON Parsing Error with Ansible using Parse_xml

I was trying to parse an XML response to JSON using parse_xml on an AWS VPN output, but I'm getting the error below.
I am able to access the data with "vpn_conn_facts.vpn_connections[0].customer_gateway_configuration"; it is only when I apply the parse_xml filter that the error arises.
fatal: [localhost]: FAILED! => {"msg": "Unexpected templating type error occurred on ({{ vpn_conn_facts.vpn_connections[0].customer_gateway_configuration | parse_xml('aws_vpn_parser.yaml') }}): 'NoneType' object is not subscriptable"}
The playbook:
- hosts: '{{ PALO_HOST | default("localhost") }}'
  connection: local
  gather_facts: true
  collections:
    - paloaltonetworks.panos
  tasks:
    - name: load var
      include_vars: provider.yaml
    - name: load aws var
      include_vars: /etc/ansible/aws/vpn_facts.yaml
    - name: load variable dir
      include_vars:
        dir: /etc/ansible/aws/vars/
    - name: aws_vpn connection info
      ec2_vpc_vpn_info:
        vpn_connection_ids: '{{ vpn_id }}'
        region: "{{ region }}"
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
      register: vpn_conn_facts
    - name: set_fact
      set_fact:
        parsed: "{{ vpn_conn_facts.vpn_connections[0].customer_gateway_configuration
                 | parse_xml('aws_vpn_parser.yaml') }}"
    - debug:
        msg: '{{ parsed }}'
Output of "vpn_conn_facts.vpn_connections[0].customer_gateway_configuration":
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<vpn_connection id=\"vpn-395e9883880\">\n <customer_gateway_id>cgw-136da58e954a</customer_gateway_id>\n <vpn_gateway_id>vgw-015c18a444e89</vpn_gateway_id>\n <vpn_connection_type>ipsec.1</vpn_connection_type>\n <vpn_connection_attributes>NoBGPVPNConnection</vpn_connection_attributes>\n <ipsec_tunnel>\n <customer_gateway>\n <tunnel_outside_address>\n <ip_address>131.226.22.251</ip_address>\n </tunnel_outside_address>\n <tunnel_inside_address>\n <ip_address>169.254.254.58</ip_address>\n <network_mask>255.255.255.252</network_mask>\n <network_cidr>30</network_cidr>\n </tunnel_inside_address>\n </customer_gateway>\n <vpn_gateway>\n <tunnel_outside_address>\n <ip_address>34.232.238.139</ip_address>\n </tunnel_outside_address>\n <tunnel_inside_address>\n <ip_address>169.254.254.57</ip_address>\n <network_mask>255.255.255.252</network_mask>\n <network_cidr>30</network_cidr>\n </tunnel_inside_address>\n </vpn_gateway>\n <ike>\n <authentication_protocol>sha1</authentication_protocol>\n <encryption_protocol>aes-128-cbc</encryption_protocol>\n <lifetime>28800</lifetime>\n <perfect_forward_secrecy>group2</perfect_forward_secrecy>\n <mode>main</mode>\n <pre_shared_key>JXzQgDDNG944e0nnh4w</pre_shared_key>\n </ike>\n <ipsec>\n <protocol>esp</protocol>\n <authentication_protocol>hmac-sha1-96</authentication_protocol>\n <encryption_protocol>aes-128-cbc</encryption_protocol>\n <lifetime>3600</lifetime>\n <perfect_forward_secrecy>group2</perfect_forward_secrecy>\n <mode>tunnel</mode>\n <clear_df_bit>true</clear_df_bit>\n <fragmentation_before_encryption>true</fragmentation_before_encryption>\n <tcp_mss_adjustment>1379</tcp_mss_adjustment>\n <dead_peer_detection>\n <interval>10</interval>\n <retries>3</retries>\n </dead_peer_detection>\n </ipsec>\n </ipsec_tunnel>\n <ipsec_tunnel>\n <customer_gateway>\n <tunnel_outside_address>\n <ip_address>131.226.22.231</ip_address>\n </tunnel_outside_address>\n <tunnel_inside_address>\n 
<ip_address>169.254.207.46</ip_address>\n <network_mask>255.255.255.252</network_mask>\n <network_cidr>30</network_cidr>\n </tunnel_inside_address>\n </customer_gateway>\n <vpn_gateway>\n <tunnel_outside_address>\n <ip_address>54.225.7.92</ip_address>\n </tunnel_outside_address>\n <tunnel_inside_address>\n <ip_address>169.254.207.45</ip_address>\n <network_mask>255.255.255.252</network_mask>\n <network_cidr>30</network_cidr>\n </tunnel_inside_address>\n </vpn_gateway>\n <ike>\n <authentication_protocol>sha1</authentication_protocol>\n <encryption_protocol>aes-128-cbc</encryption_protocol>\n <lifetime>28800</lifetime>\n <perfect_forward_secrecy>group2</perfect_forward_secrecy>\n <mode>main</mode>\n <pre_shared_key>RDt7vieaxRkjUwaCJ8M8Lo.Qztdhhfdq</pre_shared_key>\n </ike>\n <ipsec>\n <protocol>esp</protocol>\n <authentication_protocol>hmac-sha1-96</authentication_protocol>\n <encryption_protocol>aes-128-cbc</encryption_protocol>\n <lifetime>3600</lifetime>\n <perfect_forward_secrecy>group2</perfect_forward_secrecy>\n <mode>tunnel</mode>\n <clear_df_bit>true</clear_df_bit>\n <fragmentation_before_encryption>true</fragmentation_before_encryption>\n <tcp_mss_adjustment>1379</tcp_mss_adjustment>\n <dead_peer_detection>\n <interval>10</interval>\n <retries>3</retries>\n </dead_peer_detection>\n </ipsec>\n </ipsec_tunnel>\n</vpn_connection>"
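Worth noting: the customer_gateway_configuration string above is well-formed XML, which suggests the 'NoneType' object is not subscriptable error comes from how the aws_vpn_parser.yaml spec maps the document rather than from the data itself. A quick way to sanity-check the payload outside Ansible, using only the standard library (the snippet below uses a trimmed-down copy of the output above):

```python
import xml.etree.ElementTree as ET

# Trimmed-down version of the customer_gateway_configuration payload,
# keeping one representative element per tunnel.
xml_doc = """<vpn_connection id="vpn-395e9883880">
  <vpn_connection_type>ipsec.1</vpn_connection_type>
  <ipsec_tunnel>
    <vpn_gateway>
      <tunnel_outside_address>
        <ip_address>34.232.238.139</ip_address>
      </tunnel_outside_address>
    </vpn_gateway>
  </ipsec_tunnel>
  <ipsec_tunnel>
    <vpn_gateway>
      <tunnel_outside_address>
        <ip_address>54.225.7.92</ip_address>
      </tunnel_outside_address>
    </vpn_gateway>
  </ipsec_tunnel>
</vpn_connection>"""

root = ET.fromstring(xml_doc)
# Collect the outside IP address of each VPN-gateway tunnel endpoint.
outside_ips = [
    tunnel.findtext("vpn_gateway/tunnel_outside_address/ip_address")
    for tunnel in root.findall("ipsec_tunnel")
]
print(outside_ips)  # ['34.232.238.139', '54.225.7.92']
```

If the document parses cleanly like this, the next place to look is whether the spec file's top-level key matches the root element (vpn_connection).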

PagerDuty Alert: How to set "Severity" level in Alertmanager

I am trying to control the severity level of PagerDuty alerts through the Alertmanager configuration.
I hard-coded the severity level to warning in the Alertmanager receiver:
- name: 'whatever_pd_service'
  pagerduty_configs:
    - send_resolved: true
      service_key: SERVICE_KEY
      url: https://events.pagerduty.com/v2/enqueue
      client: '{{ template "pagerduty.default.client" . }}'
      client_url: '{{ template "pagerduty.default.clientURL" . }}'
      severity: 'warning'
      description: '{{ (index .Alerts 0).Annotations.summary }}'
      details:
        firing: '{{ template "pagerduty.default.instances" .Alerts.Firing }}'
        information: '{{ range .Alerts }}{{ .Annotations.information }}
          {{ end }}'
        num_firing: '{{ .Alerts.Firing | len }}'
        num_resolved: '{{ .Alerts.Resolved | len }}'
        resolved: '{{ template "pagerduty.default.instances" .Alerts.Resolved }}'
but the generated alerts still had their severity level set to critical.
Is there a way to set the Severity level in PagerDuty?
I found out why the severity field in the Alertmanager receiver configuration is not working - we are using a Prometheus (Events API v1) integration in the PagerDuty service, and according to the specification of the PD Events API v1 (https://developer.pagerduty.com/docs/ZG9jOjExMDI5NTc4-send-a-v1-event), there is no severity field.
So there are two ways to solve this problem (and achieve dynamic notification for PagerDuty): either use Events API v2, or use service orchestration (https://support.pagerduty.com/docs/event-orchestration#service-orchestrations).
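For the Events API v2 route, the severity can then be taken from the alert's own labels instead of being hard-coded. A sketch of the receiver, assuming your alert rules set a severity label (routing_key replaces service_key for v2 integrations; the template falls back to critical when the label is absent):

- name: 'whatever_pd_service'
  pagerduty_configs:
    - send_resolved: true
      routing_key: ROUTING_KEY
      severity: '{{ if .CommonLabels.severity }}{{ .CommonLabels.severity }}{{ else }}critical{{ end }}'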

I'm trying to change permissions with ansible and it's not working well

I have this playbook that runs fine, but not the way I need it to, and I cannot find the problem.
The playbook should let me change permissions recursively on a filesystem, or on a particular file.
- name: Playbook to change file and directory permissions
  hosts: '{{ target_hosts }}'
  vars:
    PATH: '{{ target_path }}'
    PERMISSIONS: '{{ number }}'
    OWNER: '{{ target_owner }}'
    GROUP: '{{ target_group }}'
  tasks:
    - name: Checking that it is not a system mount point
      fail:
        msg: "Changing permissions on system fs is not allowed"
      when: PATH in ["/etc", "/var", "/tmp", "/usr", "/", "/opt", "/home", "/boot"]
    - name: Checking if the path is a file or a filesystem
      stat:
        path: '{{ PATH }}'
      register: path_status
    - name: Applying permissions on the filesystem
      block:
        - name: Report if directory exists
          debug:
            msg: "Directory {{ PATH }} is present on the server"
          when: path_status.stat.exists
        - name: Applying permissions recursively
          file:
            path: '{{ PATH }}'
            mode: '0{{ PERMISSIONS }}'
            owner: '{{ OWNER }}'
            group: '{{ GROUP }}'
            recurse: yes
      when: path_status.stat.isdir is defined and path_status.stat.isdir
    - name: Applying permissions on the file
      block:
        - name: Report if file exists
          debug:
            msg: "File {{ PATH }} is present on the server"
          when: path_status.stat.exists
        - name: Applying permissions
          file:
            path: '{{ PATH }}'
            state: file
            mode: '0{{ PERMISSIONS }}'
            owner: '{{ OWNER }}'
            group: '{{ GROUP }}'
      when: path_status.stat.isreg is defined and path_status.stat.isreg
The first two tasks:
Verify that it is not a system filesystem.
Using the Ansible stat module, register the status of the path passed in the PATH variable.
When I execute it passing a filesystem, as in the following example,
ansible-playbook change_fs_permissions.yml -e "target_hosts=centoslabs target_path=/etc number=755 target_owner=root target_group=testing"
the execution stops because it's a system mount point (which is what I expect).
But if I pass something like /tmp/somefile.txt in the PATH variable, my intent is that the playbook should fail again, since nothing inside those filesystems may be changed; instead it continues executing and changes the permissions.
You will see that I use block, since it seemed the best fit: if a filesystem is passed it executes one set of tasks, and if it is a file it executes the others.
Can you give me some ideas on how to approach this problem?
Simplify the conditions
when: path_status.stat.isdir is defined and path_status.stat.isdir
when: path_status.stat.isreg is defined and path_status.stat.isreg
Instead of testing whether the attribute is defined, set its default to false. This also skips the situations where PATH is neither a directory nor a regular file.
when: path_status.stat.isdir|default(false)
when: path_status.stat.isreg|default(false)
In this case you can also omit testing for the PATH's existence, because if it does not exist the blocks will be skipped anyway:
when: path_status.stat.exists
Try this
- name: Applying permissions on the filesystem
  block:
    - name: Report if directory exists
      debug:
        msg: "Directory {{ PATH }} is present on the server"
    - name: Applying permissions recursively
      file:
        path: '{{ PATH }}'
        mode: '0{{ PERMISSIONS }}'
        owner: '{{ OWNER }}'
        group: '{{ GROUP }}'
        recurse: true
  when: path_status.stat.isdir|default(false)
- name: Applying permissions on the file
  block:
    - name: Report if file exists
      debug:
        msg: "File {{ PATH }} is present on the server"
    - name: Applying permissions
      file:
        path: '{{ PATH }}'
        state: file
        mode: '0{{ PERMISSIONS }}'
        owner: '{{ OWNER }}'
        group: '{{ GROUP }}'
  when: path_status.stat.isreg|default(false)
I did it and it's working now.
What I did was change this part:
- name: Checking that it is not a system mount point
  fail:
    msg: "Changing permissions on system fs is not allowed"
  when: PATH in ["/etc", "/var", "/tmp", "/usr", "/", "/opt", "/home", "/boot"]
For this (note that PATH.split('/')[1] yields the first path component without its leading slash, e.g. tmp rather than /tmp, so the slash has to be re-added before comparing against the list):
- name: Checking that it is not a system mount point
  fail:
    msg: "Changing permissions on system fs is not allowed"
  when: ('/' ~ PATH.split('/')[1]) in ["/etc", "/var", "/tmp", "/usr", "/", "/opt", "/home", "/boot"]
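The split('/') trick is easy to get subtly wrong, so it helps to exercise the check outside Ansible. A sketch of the same top-level-directory logic in plain Python (function and variable names are mine):

```python
# Protected top-level directories (same deny list as the playbook).
DENY_LIST = ["/etc", "/var", "/tmp", "/usr", "/", "/opt", "/home", "/boot"]

def is_system_path(path: str) -> bool:
    """Return True when path lives under a protected top-level directory."""
    # "/tmp/somefile.txt".split("/") -> ["", "tmp", "somefile.txt"],
    # so element [1] is the top-level name WITHOUT its leading slash.
    top_level = "/" + path.split("/")[1]
    return top_level in DENY_LIST

print(is_system_path("/tmp/somefile.txt"))  # True - file lives under /tmp
print(is_system_path("/data/app.conf"))     # False - /data is not protected
print(is_system_path("/"))                  # True - the root itself
```

The same comparison works in the Jinja condition as long as the leading slash is re-added before checking membership.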

How to exclude filesystems with ansible

I'm writing a playbook to change file and folder permissions on a Linux server.
So far it is working and looks like this:
- name: Playbook to change file and directory permissions
  hosts: all
  become: yes
  vars:
    DIR: '{{ target_dir }}'
    FILE: '{{ target_file }}'
    PERMISSIONS: '{{ number }}'
    OWNER: '{{ target_owner }}'
    GROUP: '{{ target_group }}'
  tasks:
    - name: Checking if the directory exists
      stat:
        path: '{{ DIR }}'
      register: dir_status
    - name: Checking if the file exists
      stat:
        path: '{{ FILE }}'
      register: file_status
    - name: Report if directory exists
      debug:
        msg: "Directory {{ DIR }} is present on the server"
      when: dir_status.stat.exists and dir_status.stat.isdir
    - name: Report if file exists
      debug:
        msg: "File {{ FILE }} is present on the server"
      when: file_status.stat.exists
    - name: Applying new permissions
      file:
        path: '{{ DIR }}/{{ FILE }}'
        state: file
        mode: '0{{ PERMISSIONS }}'
        owner: '{{ OWNER }}'
        group: '{{ GROUP }}'
But what I need is: if the user executing the playbook in Rundeck tries to change permissions on the /boot, /var, /etc, /tmp or /usr directories, Ansible should not attempt it and should throw an error message instead.
How can I do that?
I understand your question as: you would like to fail with a custom message when the variable DIR contains one of the values /boot, /var, /etc, /tmp or /usr.
To do so you may use
- name: You can't work on {{ DIR }}
  fail:
    msg: The system may not work on {{ DIR }} according ...
  when: DIR in ["/boot", "/var", "/etc", "/tmp", "/usr"]
There is also the meta module, which can end_play when conditions are met.
tasks:
  - meta: end_play
    when: DIR in ["/boot", "/var", "/etc", "/tmp", "/usr"]
Both fail and end_play can be combined with different variables for certain use cases:
when: "'download' in ansible_run_tags or 'unpack' in ansible_run_tags"
when: '"DMZ" not in group_names'
Thanks to
Run an Ansible task only when the variable contains a specific string
Ansible - Execute task when variable contains specific string
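One pitfall to avoid when writing these conditions: in Jinja, as in Python, an expression like "/boot" or "/var" in DIR parses as "/boot" or ("/var" in DIR), and since a non-empty string is always truthy, the whole condition is true regardless of DIR. A small Python demonstration of the pitfall and the list-membership form that avoids it:

```python
DIR = "/home/deploy"

# Buggy: parses as "/boot" or ("/var" in DIR). "/boot" is a non-empty
# string and therefore truthy, so the result is True for ANY DIR.
buggy = "/boot" or "/var" in DIR
print(bool(buggy))  # True, even though DIR is neither /boot nor /var

# Correct: test DIR's membership in an explicit list of values.
correct = DIR in ["/boot", "/var", "/etc", "/tmp", "/usr"]
print(correct)  # False
```

The same reasoning applies to conditions over ansible_run_tags: each candidate string needs its own `in` test, joined with `or`.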
Please take note that you are constructing the full path by concatenating {{ DIR }}/{{ FILE }}. The simple approach above will not handle an empty DIR, or a FILE name that includes a path. Test cases could be
DIR: ""
FILE: "/tmp/test"
DIR: "/"
FILE: "tmp/test"
Maybe you would like to perform the test on the full file path instead, or test what a variable begins with.
With respect to the comments from Zeitounator and seshadri-c, you may also try the approach of the assert module (note the not - the assertion must hold for allowed directories and fail for forbidden ones):
- name: Check for allowed directories
  assert:
    that:
      - DIR not in ["/boot", "/etc", "/var", "/tmp", "/usr"]
    quiet: true
    fail_msg: "The system may not work on {{ DIR }} according ..."
    success_msg: "Path is OK."

Ansible playbook loop with with_items

I have to update multiple per-user files in sudoers.d with a few lines/commands each, using an Ansible playbook.
users.yml
user1:
  - Line1111
  - Line2222
  - Line3333
user2:
  - Line4444
  - Line5555
  - Line6666
main.yml
- hosts: "{{ host_group }}"
  vars_files:
    - ../users.yml
  tasks:
    - name: Add user "user1" to sudoers.d
      lineinfile:
        path: /etc/sudoers.d/user1
        line: '{{ item }}'
        state: present
        mode: 0440
        create: yes
        validate: 'visudo -cf %s'
      with_items:
        - "{{ user1 }}"
The above works only for user1.
If I also want to include user2, how do I change the file name path: /etc/sudoers.d/user1?
I tried the below and it's not working:
Passing the users below as a variable to main.yml at run time:
users:
  - "user1"
  - "user2"
- name: Add user "{{users}}" to sudoers.d
  lineinfile:
    path: /etc/sudoers.d/{{users}}
    line: '{{ item }}'
    state: present
    mode: 0440
    create: yes
    validate: 'visudo -cf %s'
  with_items:
    - "{{ users }}"
So, basically, I want to pass user1 and user2 in a {{users}} variable, take each user's lines from users.yml, and add them to the respective files (/etc/sudoers.d/user1 and /etc/sudoers.d/user2).
So /etc/sudoers.d/user1 should look like
Line1111
Line2222
Line3333
and /etc/sudoers.d/user2 should look like
Line4444
Line5555
Line6666
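The underlying shape of the problem is a loop over (file, line) pairs: the users.yml mapping has to be flattened so each user's lines are paired with that user's sudoers path, which is what Ansible's dict2items filter or with_subelements produces. The pairing can be illustrated in plain Python (variable names are mine):

```python
# Mapping as loaded from users.yml: one sudoers file per user,
# several lines per file.
users = {
    "user1": ["Line1111", "Line2222", "Line3333"],
    "user2": ["Line4444", "Line5555", "Line6666"],
}

# Flatten into (path, line) pairs - the unit of work lineinfile needs.
work_items = [
    (f"/etc/sudoers.d/{user}", line)
    for user, lines in users.items()
    for line in lines
]

for path, line in work_items:
    print(path, line)
```

Each pair corresponds to one lineinfile invocation, with the path and line drawn from the same item.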
Try adding quotes:
users:
  - "user1"
  - "user2"
- name: "Add user {{users}} to sudoers.d"
  lineinfile:
    path: "/etc/sudoers.d/{{users}}"
    line: "{{ item }}"
    state: present
    mode: 0440
    create: yes
    validate: 'visudo -cf %s'
  with_items:
    - "{{ users }}"
As per Ansible Documentation on Using Variables:
YAML syntax requires that if you start a value with {{ foo }} you quote the whole line, since it wants to be sure you aren’t trying to start a YAML dictionary. This is covered on the YAML Syntax documentation.
This won’t work:
- hosts: app_servers
  vars:
    app_path: {{ base_path }}/22
Do it like this and you’ll be fine:
- hosts: app_servers
  vars:
    app_path: "{{ base_path }}/22"
cat users.yml
---
users_list:
  - name: user1
    filename: user1sudoers
    args:
      - Line1111
      - Line2222
      - Line3333
  - name: user2
    filename: user2sudoers
    args:
      - Line4444
      - Line5555
      - Line6666
I use template here instead of lineinfile.
---
cat sudoers.j2
{% if item.args is defined and item.args %}
{% for arg in item.args %}
{{ arg }}
{% endfor %}
{% endif %}
the task content
---
- hosts: localhost
  vars_files: ./users.yml
  tasks:
    - name: sync sudoers.j2 to localhost
      template:
        src: sudoers.j2
        dest: "/tmp/{{ item.filename }}"
      loop: "{{ users_list }}"
      when: "users_list is defined and users_list"
After running the task, two files are generated under the /tmp directory.
cat /tmp/user1sudoers
Line1111
Line2222
Line3333
cat /tmp/user2sudoers
Line4444
Line5555
Line6666
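The template loop can be sanity-checked without running Ansible at all: sudoers.j2 just joins item.args with newlines when args is non-empty. A plain-Python simulation of what the template task produces, assuming the flat list-of-dicts structure with name/filename/args keys (an illustration, not the Ansible templating engine):

```python
# Same shape as the users_list variable loaded from users.yml.
users_list = [
    {"name": "user1", "filename": "user1sudoers",
     "args": ["Line1111", "Line2222", "Line3333"]},
    {"name": "user2", "filename": "user2sudoers",
     "args": ["Line4444", "Line5555", "Line6666"]},
]

def render_sudoers(item: dict) -> str:
    """Mimic sudoers.j2: emit one arg per line when args is non-empty."""
    if item.get("args"):
        return "\n".join(item["args"]) + "\n"
    return ""

# One rendered file body per loop item, keyed by destination filename.
rendered = {item["filename"]: render_sudoers(item) for item in users_list}
print(rendered["user1sudoers"], end="")
```

Each dictionary entry corresponds to one /tmp/<filename> produced by the template task above.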
