Ansible: Playbook to export systemd status into CSV format - linux

We have more than 1,000 VMs running on different Hyper-V nodes, and I want to create a CSV report on the status of a single systemd service.
For example, I would like to check whether postfix is in the started or stopped state. These statuses need to be printed to a CSV file. The expected result has the column layout shown in the playbook's `headers` variable below.

Finally, I found a solution to this request.
Here is the code; I hope it is helpful for others.
Thanks to Greg Sowell's blog, which helped me get this done: https://gregsowell.com/?p=7289
---
- name: Generate an HTML report from jinja template
  hosts: postfix-hosts
  gather_facts: true
  vars:
    # email settings
    email_subject: System status Report
    email_host: stackoverflw.smtp.com
    email_from: noreply@stackoverflw.com
    email_to: AdMin_Stack@stackoverflw.com
    # file settings
    csv_path: /tmp
    csv_filename: report.csv
    headers: Hostname,OS,Distro Ver,Kernel Ver,Postfix Status,FQDN,Total VCPU,Total RAM,Total SWAP,Total Disk,Hyper-V
  tasks:
    - name: Gather last Postfix status
      ansible.builtin.shell: systemctl status postfix | egrep -i Active | awk '{ print $2,$3 }'
      register: active

    - name: Save CSV headers
      ansible.builtin.lineinfile:
        dest: "{{ csv_path }}/{{ csv_filename }}"
        line: "{{ headers }}"
        create: true
        state: present
      delegate_to: localhost
      run_once: true

    - name: Build out CSV file
      ansible.builtin.lineinfile:
        dest: "{{ csv_path }}/{{ csv_filename }}"
        line: "{{ inventory_hostname }},{{ ansible_distribution }},{{ ansible_distribution_version }},{{ ansible_kernel }},{{ active.stdout }},{{ ansible_fqdn }},{{ ansible_processor_vcpus }},{{ ansible_memtotal_mb }},{{ ansible_swaptotal_mb }},{{ ansible_devices.vda.partitions.vda1.size }},{{ ansible_product_name }}"
        create: true
        state: present
      delegate_to: localhost

    - name: Read in CSV to variable
      community.general.read_csv:
        path: "{{ csv_path }}/{{ csv_filename }}"
      register: csv_file
      delegate_to: localhost
      run_once: true

    # - name: debug csv_file
    #   debug:
    #     var: csv_file
    #   run_once: true

    - name: Send Email
      community.general.mail:
        host: "{{ email_host }}"
        from: "{{ email_from }}"
        port: 25
        to: "{{ email_to }}"
        subject: "[Ansible] {{ email_subject }}"
        body: "{{ lookup('template', 'report.html.j2') }}"
        attach: "{{ csv_path }}/{{ csv_filename }}"
        subtype: html
      delegate_to: localhost
      run_once: true
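A side note, not part of the original answer: parsing `systemctl status` output with egrep/awk is format-sensitive. A more robust sketch uses the `ansible.builtin.service_facts` module instead (assuming the unit is named `postfix.service`):

```yaml
- name: Gather service facts instead of parsing systemctl output
  ansible.builtin.service_facts:

- name: Record postfix state (e.g. "running" / "stopped")
  ansible.builtin.set_fact:
    postfix_state: "{{ ansible_facts.services['postfix.service'].state | default('not-found') }}"
```

`postfix_state` could then replace `active.stdout` in the CSV line.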
report.html.j2
<table style="border: 1px solid black; border-collapse: collapse;">
  <tr>
  {% for header in headers.split(",") %}
    <th style="border: 1px solid black; padding: 8px 16px;">{{ header }}</th>
  {% endfor %}
  </tr>
  {% for host in csv_file.list %}
  <tr>
    {% for header in headers.split(",") %}
    <td style="border: 1px solid black; padding: 8px 16px;">{{ host[header] }}</td>
    {% endfor %}
  </tr>
  {% endfor %}
</table>

Related

ansible CSV header options when appending with for loop

I'm gathering network device data with Ansible and appending it to a CSV file, which works fine for me, but now I would like the headers to match what has actually been gathered.
- name: Playbook to collect ntp,snmp_facts and put into csv file
  hosts: all
  connection: network_cli
  gather_facts: true
  # check_mode: yes
  vars:
    output_path: "./reports/"
    filename: "device_report_{{ date }}.csv"
    vendor: CISCO
  tasks:
    - name: CSV - Generate output filename
      set_fact: date="{{ lookup('pipe', 'date +%Y%m%d') }}"
      run_once: true

    - name: CSV - Create file and set the header
      lineinfile:
        dest: "{{ output_path }}/{{ filename }}"
        line: hostname,ip_address,image,iostype,model,serialnum,system,version,ntp_server_1,ntp_server_2,vrf,snmp_server_1,snmp_server_2,snmp_server_3,snmp_server_4,snmp_server_5,snmp_server_6
        create: true
        state: present

    - import_tasks: /path/playbooks/facts/ntp_facts/ntp_facts_get.yml
    # - import_tasks: /path/playbooks/facts/snmp_facts/snmp_facts_get_2960.yml
    # - import_tasks: /path/playbooks/facts/snmp_facts/snmp_facts_get_not_2960.yml
    - import_tasks: /path/playbooks/facts/snmp_facts/snmp_another_test.yml
    - import_tasks: /path/playbooks/facts/dns_facts/dns_facts_domain_name_get.yml

    - name: CSV - Getting all the data just before printing to csv
      set_fact:
        csv_tmp: >
          {{ ansible_net_hostname|default('N/A') }},
          {{ ansible_host|default('N/A') }},
          {{ ansible_net_image|default('N/A') }},
          {{ ansible_net_iostype|default('N/A') }},
          {{ ansible_net_model|default('N/A') }},
          {{ ansible_net_serialnum|default('N/A') }},
          {{ ansible_net_system|default('N/A') }},
          {{ ansible_net_version|default('N/A') }},
          {{ ntp_servers.gathered.servers[0].server|default('N/A') }},
          {{ ntp_servers.gathered.servers[1].server|default('N/A') }},
          {{ ntp_servers.gathered.servers[0].vrf|default('N/A') }},
          {% set snmp_list = [] %}
          {% for snmp_host in snmp_hosts %}
          {% set snmp_list = snmp_list.append(snmp_host.host ~ ',' ~ snmp_host.version) %}
          {% endfor %}
          {{ snmp_list|join(',') }},
          {{ domain_name[0]|default('N/A') }},

    - name: check whats up with this csv_tmp
      debug:
        var: csv_tmp

    - name: CSV - Write information into .csv file
      lineinfile:
        insertafter: EOF
        dest: "{{ output_path }}/{{ filename }}"
        line: "{{ csv_tmp }}"

    - name: CSV - Blank lines removal
      lineinfile:
        path: "./{{ output_path }}/{{ filename }}"
        state: absent
        regex: '^\s*$'
When appending a line for each device via csv_tmp, I have this for loop for SNMP:
{% for snmp_host in snmp_hosts %}
{% set snmp_list = snmp_list.append(snmp_host.host ~ ',' ~ snmp_host.version) %}
{% endfor %}
{{ snmp_list|join(',') }},
Since I don't know in advance how many SNMP hosts are configured, I was wondering whether there is a better way to achieve this: some dynamic option to generate the header, or a way to map each value such as {{ ansible_net_hostname|default('N/A') }} into a fixed column first and then remove any empty columns.
I have a bit of a time constraint, so I'm reaching out here for help.
Given the inventory for testing
shell> cat hosts
test_11 ansible_host=10.1.0.61
test_13 ansible_host=10.1.0.63
Create a dictionary first. Declare the below variables in vars. For example, in group_vars
shell> cat group_vars/all/csv_content.yml
csv_content_dict_str: |
  hostname: {{ ansible_net_hostname|d('N/A') }}
  ip_address: {{ ansible_host|d('N/A') }}
  image: {{ ansible_net_image|d('N/A') }}
  iostype: {{ ansible_net_iostype|d('N/A') }}
  model: {{ ansible_net_model|d('N/A') }}
  serialnum: {{ ansible_net_serialnum|d('N/A') }}
  system: {{ ansible_net_system|d('N/A') }}
  version: {{ ansible_net_version|d('N/A') }}
  ntp_server_1: {{ ntp_servers.gathered.servers[0].server|d('N/A') }}
  ntp_server_2: {{ ntp_servers.gathered.servers[1].server|d('N/A') }}
  vrf: {{ ntp_servers.gathered.servers[0].vrf|d('N/A') }}
  {% for snmp_host in snmp_hosts|d([]) %}
  snmp_server_{{ loop.index }}: {{ snmp_host.host }}
  snmp_server_version_{{ loop.index }}: {{ snmp_host.version }}
  {% endfor %}
  domain_name: {{ domain_name[0]|d('N/A') }}
csv_content_dict: "{{ csv_content_dict_str|from_yaml }}"
csv_content: |
  {{ csv_content_dict.keys()|join(',') }}
  {{ csv_content_dict.values()|join(',') }}
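Outside Ansible, the idea behind csv_content can be illustrated in plain Python (illustration only; the sample values are from the examples below, and dict insertion order is preserved in Python 3.7+, which is what makes the header and row line up):

```python
# Plain-Python illustration of the csv_content trick: one dict per host,
# header row from the keys, data row from the values.
csv_content_dict = {
    "hostname": "N/A",
    "ip_address": "10.1.0.61",
    "snmp_server_1": "snmp1.example.com",
}
header = ",".join(csv_content_dict.keys())
row = ",".join(csv_content_dict.values())
print(header)  # hostname,ip_address,snmp_server_1
print(row)     # N/A,10.1.0.61,snmp1.example.com
```

Because hosts with more SNMP servers simply produce more keys, the header always matches the gathered data.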
Without any facts collected, this gives
ok: [test_11] =>
  csv_content_dict:
    domain_name: N/A
    hostname: N/A
    image: N/A
    iostype: N/A
    ip_address: 10.1.0.61
    model: N/A
    ntp_server_1: N/A
    ntp_server_2: N/A
    serialnum: N/A
    system: N/A
    version: N/A
    vrf: N/A
ok: [test_13] =>
  csv_content_dict:
    domain_name: N/A
    hostname: N/A
    image: N/A
    iostype: N/A
    ip_address: 10.1.0.63
    model: N/A
    ntp_server_1: N/A
    ntp_server_2: N/A
    serialnum: N/A
    system: N/A
    version: N/A
    vrf: N/A
Write the files
- copy:
    dest: "{{ output_path }}/{{ filename }}"
    content: "{{ csv_content }}"
gives
shell> ssh admin@test_11 cat /tmp/reports/device_report_2023-01-22.csv
hostname,ip_address,image,iostype,model,serialnum,system,version,ntp_server_1,ntp_server_2,vrf,domain_name
N/A,10.1.0.61,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
shell> ssh admin@test_13 cat /tmp/reports/device_report_2023-01-22.csv
hostname,ip_address,image,iostype,model,serialnum,system,version,ntp_server_1,ntp_server_2,vrf,domain_name
N/A,10.1.0.63,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A
Example of a complete playbook to write the report
shell> cat write_report.yml
- hosts: all
  vars:
    output_path: /tmp/reports
    filename: "device_report_{{ date }}.csv"
  tasks:
    - set_fact:
        date: "{{ '%Y-%m-%d'|strftime }}"
      run_once: true
    - file:
        state: directory
        path: "{{ output_path }}"
    - block:
        - debug:
            var: csv_content_dict
        - debug:
            msg: |
              {{ csv_content }}
      when: debug|d(false)|bool
    - copy:
        dest: "{{ output_path }}/{{ filename }}"
        content: "{{ csv_content }}"
Example of a complete playbook to read the report
shell> cat read_report.yml
- hosts: all
  vars:
    output_path: /tmp/reports
    filename: "device_report_{{ date }}.csv"
  tasks:
    - set_fact:
        date: "{{ '%Y-%m-%d'|strftime }}"
      run_once: true
    - community.general.read_csv:
        path: "{{ output_path }}/{{ filename }}"
      register: report
    - debug:
        var: report.list
Create, or collect facts. For example, create host_vars for testing
shell> cat host_vars/test_11/test_network_facts.yml
snmp_hosts:
  - host: snmp1.example.com
    version: SNMPv3
  - host: snmp2.example.com
    version: SNMPv3
Write the report to test_11
shell> ansible-playbook write_report.yml -l test_11
PLAY [all] ***********************************************************************************
TASK [Gathering Facts] ***********************************************************************
ok: [test_11]
TASK [set_fact] ******************************************************************************
ok: [test_11]
TASK [file] **********************************************************************************
ok: [test_11]
TASK [debug] *********************************************************************************
skipping: [test_11]
TASK [debug] *********************************************************************************
skipping: [test_11]
TASK [copy] **********************************************************************************
changed: [test_11]
PLAY RECAP ***********************************************************************************
test_11: ok=4 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
Take a look at the file
shell> ssh admin@test_11 cat /tmp/reports/device_report_2023-01-22.csv
hostname,ip_address,image,iostype,model,serialnum,system,version,ntp_server_1,ntp_server_2,vrf,snmp_server_1,snmp_server_version_1,snmp_server_2,snmp_server_version_2,domain_name
N/A,10.1.0.61,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,snmp1.example.com,SNMPv3,snmp2.example.com,SNMPv3,N/A
Read the file from test_11
shell> ansible-playbook read_report.yml -l test_11
PLAY [all] ***********************************************************************************
TASK [Gathering Facts] ***********************************************************************
ok: [test_11]
TASK [set_fact] ******************************************************************************
ok: [test_11]
TASK [community.general.read_csv] ************************************************************
ok: [test_11]
TASK [debug] *********************************************************************************
ok: [test_11] =>
report.list:
  - domain_name: N/A
    hostname: N/A
    image: N/A
    iostype: N/A
    ip_address: 10.1.0.61
    model: N/A
    ntp_server_1: N/A
    ntp_server_2: N/A
    serialnum: N/A
    snmp_server_1: snmp1.example.com
    snmp_server_2: snmp2.example.com
    snmp_server_version_1: SNMPv3
    snmp_server_version_2: SNMPv3
    system: N/A
    version: N/A
    vrf: N/A
PLAY RECAP ***********************************************************************************
test_11: ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Q: "It's overwriting every time since some hosts have 5 SNMP configured and some only 2."
A: This can't happen if you write the reports into separate files on the remote hosts. Add hours-minutes-seconds (%H-%M-%S) to the name of the CSV file:
- set_fact:
    date: "{{ '%Y-%m-%d-%H-%M-%S'|strftime }}"
  run_once: true
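For reference, Ansible's strftime filter uses the same format directives as Python's strftime, so the format string above behaves like this (illustration with a fixed timestamp):

```python
from datetime import datetime

# Same format string as the Ansible strftime filter above,
# applied to a fixed example timestamp.
stamp = datetime(2023, 1, 23, 4, 30, 0).strftime('%Y-%m-%d-%H-%M-%S')
print(stamp)  # 2023-01-23-04-30-00
```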
Then, a new file will be created each time you run the playbook. For example,
shell> ansible-playbook write_report.yml
will create two files
shell> ssh admin@test_11 ls -la /tmp/reports
total 34
drwxr-xr-x 2 root wheel 4 Jan 23 03:30 .
drwxrwxrwt 13 root wheel 27 Jan 23 03:30 ..
-rw-r--r-- 1 root wheel 283 Jan 22 08:57 device_report_2023-01-22.csv
-rw-r--r-- 1 root wheel 283 Jan 23 03:30 device_report_2023-01-23-04-30-00.csv
shell> ssh admin@test_13 ls -la /tmp/reports
total 34
drwxr-xr-x 2 root wheel 4 Jan 23 03:30 .
drwxrwxrwt 10 root wheel 17 Jan 23 03:30 ..
-rw-r--r-- 1 root wheel 161 Jan 22 08:30 device_report_2023-01-22.csv
-rw-r--r-- 1 root wheel 161 Jan 23 03:30 device_report_2023-01-23-04-30-00.csv
The next option is to write all files to the controller (localhost). For example, create the directories
- file:
    state: directory
    path: "{{ output_path }}/{{ inventory_hostname }}"
  delegate_to: localhost
and write the files
- copy:
    dest: "{{ output_path }}/{{ inventory_hostname }}/{{ filename }}"
    content: "{{ csv_content }}"
  delegate_to: localhost
Then, each time you run the playbook new files will be created at the controller
shell> tree /tmp/reports/
/tmp/reports/
├── test_11
│   └── device_report_2023-01-23-04-49-27.csv
└── test_13
    └── device_report_2023-01-23-04-49-27.csv
2 directories, 2 files
You can easily read the reports from the files at the controller. For example, given the CSV files
shell> tree /tmp/reports/
/tmp/reports/
├── test_11
│   ├── device_report_2023-01-23-04-49-27.csv
│   └── device_report_2023-01-23-05-32-40.csv
└── test_13
├── device_report_2023-01-23-04-49-27.csv
└── device_report_2023-01-23-05-32-40.csv
2 directories, 4 files
Read the files
- community.general.read_csv:
    path: "{{ item.src }}"
  register: out
  with_community.general.filetree: "{{ output_path }}"
  when: item.state == 'file'
  loop_control:
    label: "{{ item.path }}"
Declare the below variables
output_path: "/tmp/reports"
reports_str: |
  {% for result in out.results|selectattr('list', 'defined') %}
  {% set _keys = result.item.path|split('/') %}
  {% set host = _keys|first %}
  {% set report = _keys|last|regex_replace('^device_report_(.*)\.csv', '\\1') %}
  - {{ host }}:
      {{ report }}: {{ result.list }}
  {% endfor %}
reports: "{{ reports_str|from_yaml|combine(recursive=true) }}"
reports_lists: "{{ dict(reports|dict2items|json_query('[].[key, value.keys(@)]')) }}"
give
reports:
  test_11:
    2023-01-23-04-49-27:
      - domain_name: N/A
        hostname: N/A
        image: N/A
        iostype: N/A
        ip_address: 10.1.0.61
        model: N/A
        ntp_server_1: N/A
        ntp_server_2: N/A
        serialnum: N/A
        snmp_server_1: snmp1.example.com
        snmp_server_2: snmp2.example.com
        snmp_server_version_1: SNMPv3
        snmp_server_version_2: SNMPv3
        system: N/A
        version: N/A
        vrf: N/A
    2023-01-23-05-32-40:
      - domain_name: N/A
        hostname: N/A
        image: N/A
        iostype: N/A
        ip_address: 10.1.0.61
        model: N/A
        ntp_server_1: N/A
        ntp_server_2: N/A
        serialnum: N/A
        snmp_server_1: snmp1.example.com
        snmp_server_2: snmp2.example.com
        snmp_server_version_1: SNMPv3
        snmp_server_version_2: SNMPv3
        system: N/A
        version: N/A
        vrf: N/A
  test_13:
    2023-01-23-04-49-27:
      - domain_name: N/A
        hostname: N/A
        image: N/A
        iostype: N/A
        ip_address: 10.1.0.63
        model: N/A
        ntp_server_1: N/A
        ntp_server_2: N/A
        serialnum: N/A
        system: N/A
        version: N/A
        vrf: N/A
    2023-01-23-05-32-40:
      - domain_name: N/A
        hostname: N/A
        image: N/A
        iostype: N/A
        ip_address: 10.1.0.63
        model: N/A
        ntp_server_1: N/A
        ntp_server_2: N/A
        serialnum: N/A
        system: N/A
        version: N/A
        vrf: N/A
reports_lists:
  test_11:
    - 2023-01-23-05-32-40
    - 2023-01-23-04-49-27
  test_13:
    - 2023-01-23-05-32-40
    - 2023-01-23-04-49-27
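The `dict2items` + `json_query` step that builds reports_lists is equivalent to a plain-Python mapping of each host to its report timestamps (illustration only, with empty row lists standing in for the parsed CSV data):

```python
# reports maps host -> {timestamp -> rows}; reports_lists keeps only the
# timestamps per host, like dict2items piped through a JMESPath projection.
reports = {
    "test_11": {"2023-01-23-04-49-27": [], "2023-01-23-05-32-40": []},
    "test_13": {"2023-01-23-04-49-27": [], "2023-01-23-05-32-40": []},
}
reports_lists = {host: list(stamps.keys()) for host, stamps in reports.items()}
print(reports_lists["test_11"])  # ['2023-01-23-04-49-27', '2023-01-23-05-32-40']
```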
Example of a complete playbook to read the reports at the controller
shell> cat read_report.yml
- hosts: localhost
  vars:
    output_path: /tmp/reports
    reports_str: |
      {% for result in out.results|selectattr('list', 'defined') %}
      {% set _keys = result.item.path|split('/') %}
      {% set host = _keys|first %}
      {% set report = _keys|last|regex_replace('^device_report_(.*)\.csv', '\\1') %}
      - {{ host }}:
          {{ report }}: {{ result.list }}
      {% endfor %}
    reports: "{{ reports_str|from_yaml|combine(recursive=true) }}"
    reports_lists: "{{ dict(reports|dict2items|json_query('[].[key, value.keys(@)]')) }}"
  tasks:
    - debug:
        msg: "{{ item.src }}"
      with_community.general.filetree: "{{ output_path }}"
      when:
        - debug|d(false)|bool
        - item.state == 'file'
      loop_control:
        label: "{{ item.path }}"
    - community.general.read_csv:
        path: "{{ item.src }}"
      register: out
      with_community.general.filetree: "{{ output_path }}"
      when: item.state == 'file'
      loop_control:
        label: "{{ item.path }}"
    - debug:
        var: reports
    - debug:
        var: reports_lists

Ansible jinja dictionary create multiple sudo files for each user

I'm trying to create a sudo file for each user.
Playbook:
- name:
  hosts: all
  gather_facts: false
  tasks:
    - name:
      template:
        src: sudo.j2
        dest: "/etc/sudoers.d/{{ item.name }}"
      loop: "{{ userinfo }}"
      when: "'admins' in item.groupname"
Var file:
userinfo:
  - groupname: admins
    name: bill
  - groupname: admins
    name: bob
  - groupname: devs
    name: bea
Jinja file:
{% for item in userinfo %}
{% if item.groupname=="admins" %}
{{item.name}} ALL=ALL NOPASSWD:ALL
{% endif %}
{% endfor %}
What I am getting is two files, but each contains the information of both users.
bill ALL=ALL NOPASSWD:ALL
bob ALL=ALL NOPASSWD:ALL
How do I make it work such that each file contains only that user's information?
The issue is that you have two loops: one in the playbook, the other in the Jinja template. Leave the template with only the templated information:
{{ item.name }} ALL=ALL NOPASSWD:ALL
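Putting it together, a sketch of the corrected setup (the `mode` and `validate` options are my additions, worth considering but not part of the original answer):

```yaml
# sudo.j2 now contains only the single templated line; the loop in the
# playbook supplies `item`, so each rendered file names exactly one user.
- name: Create one sudoers file per admin
  ansible.builtin.template:
    src: sudo.j2
    dest: "/etc/sudoers.d/{{ item.name }}"
    mode: '0440'               # assumed addition: sudoers files should not be world-writable
    validate: 'visudo -cf %s'  # assumed addition: syntax-check before installing
  loop: "{{ userinfo }}"
  when: "'admins' in item.groupname"
```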

FAILED! => {"msg": "'item' is undefined"}

I'm trying to create a partition and mountpoint on Azure disks that are attached to a VM at creation time by Terraform. The disks are created based on user input through Jenkins.
Each disk is passed a LUN number, and I fetch the device name (sdc, sdd, etc.) for each disk using that LUN number and grep. The tasks in my_tasks.yml are to be looped with include_tasks in playbook.yml as below:
my_tasks.yml
---
- parted:
    device: "{{ volumename.stdout }}"
    number: 1
    state: present
- filesystem:
    fstype: xfs
    dev: "{{ volumename.stdout }}"
- mount:
    fstype: xfs
    opts: noatime
    src: "{{ volumename.stdout }}"
    path: "{{ item.mountpoint }}"
    state: mounted
- command: blkid -s UUID -o value {{ volumename.stdout }}
  register: volumename_disk
- blockinfile:
    path: /etc/fstab
    state: present
    block: |
      UUID={{ volumename_disk.stdout }} {{ volumename.stdout }} xfs defaults,noatime,nofail 0 0
playbook.yml
---
- hosts: "{{ host }}"
  become: true
  become_method: sudo
  become_user: root
  vars:
    mount: "{{ lookup('file', '/home/xyz/vars.txt') }}"
  tasks:
    - name: Generate the Lun_Name
      shell: "tree /dev/disk/azure/scsi1 | grep -i lun | awk '{print $2}'"
      register: lun
    - set_fact:
        lun_name: "{{ lun_name|default([]) + [ { 'name': lun.stdout } ] }}"
    - debug:
        msg: "LUN is: {{ lun_name }}"
    - name: Generate the Volume_Name
      shell: echo "$(ls -l /dev/disk/azure/scsi1 |grep lun |egrep -o "([^\/]+$)")"
      register: volumename
    - set_fact:
        volumenames: "{{ volumenames|default([]) + [ { 'name': volumename.stdout } ] }}"
    - debug:
        msg: "VOLUMENAME is: {{ volumenames }}"
    # - debug:
    #     msg: "the mountpoints are {{ mount }}"
    - set_fact:
        mountpoint: "{{ lookup('file', '/home/xyz/vars.txt').split(',') }}"
    - debug:
        msg: "the mountpoints are {{ mountpoint }}"
      # loop: "{{ mountpoint }}"
    - include_tasks: my_tasks.yml
      loop: "{{ item.volumenames | list }} {{ item.mountpoint | list }}"
      loop_control:
        loop_var: "{{ item }}"
fatal: [10.102.26.74]: FAILED! => {"msg": "'item' is undefined"}
The issue seems to be with the loop around include_tasks; I am able to get a loop working for mountpoint when it runs after the set_fact in playbook.yml.
How can I resolve this issue or improve the code?

Insert lines in a file from an other file lines in Ansible

I would like to copy lines from the file /tmp/test1 to the file /tmp/test2.
/tmp/test1 contains:
argument1
argument2
#test1
#test2
#test3
/tmp/test2 contains:
argument1.1
argument2192
#example
#test2
#example1
My main goal is to insert into /tmp/test2 every line from /tmp/test1 that doesn't already exist there. Each added line must go after the last existing line with the same kind of beginning (^[[:alpha:]] or ^#), so /tmp/test2 should look like this:
argument1.1
argument2192
argument1
argument2
#example
#test2
#example1
#test1
#test3
I created this playbook, but it doesn't do what I am looking for:
- name: check test1 content
  command: cat /tmp/test1
  register: tmp_content
- name: insert line
  lineinfile:
    path: /tmp/test2
    line: '{{ item }}'
    insertafter: "^#*"
  loop: "{{ tmp_content.stdout_lines }}"
1) "Insert every line that doesn't exist in /tmp/test2 from file /tmp/test1"
2) "The line added must be added at the end of the last line which is containing the same beginning."
A: The task below does the job. If the first character is #, the line is inserted at the end of the file; otherwise, the line is inserted before the first line starting with #. The parameters insertafter and insertbefore may not be used together, and the omit placeholder is what makes them mutually exclusive here.
- name: insert line
  lineinfile:
    path: /tmp/test2
    line: "{{ item }}"
    insertafter: "{{ (item.0 == '#')|ternary('EOF', omit) }}"
    insertbefore: "{{ (item.0 != '#')|ternary('^#.*$', omit) }}"
    firstmatch: true
  loop: "{{ tmp_content.stdout_lines }}"
Example of a complete playbook for testing
shell> cat pb.yml
- hosts: localhost
  tasks:
    - name: check test1 content
      command: cat /tmp/test1
      register: tmp_content
      changed_when: false
    - name: insert line
      lineinfile:
        path: /tmp/test2
        line: "{{ item }}"
        insertafter: "{{ (item.0 == '#')|ternary('EOF', omit) }}"
        insertbefore: "{{ (item.0 != '#')|ternary('^#.*$', omit) }}"
        firstmatch: true
      loop: "{{ tmp_content.stdout_lines }}"
The playbook is idempotent. See the output of the diff_mode below
shell> ansible-playbook pb.yml --diff
...
TASK [insert line] *************************************************
--- before: /tmp/test2 (content)
+++ after: /tmp/test2 (content)
@@ -1,5 +1,6 @@
argument1.1
argument2192
+argument1
#example
#test2
#example1
changed: [localhost] => (item=argument1)
--- before: /tmp/test2 (content)
+++ after: /tmp/test2 (content)
@@ -1,6 +1,7 @@
argument1.1
argument2192
argument1
+argument2
#example
#test2
#example1
changed: [localhost] => (item=argument2)
--- before: /tmp/test2 (content)
+++ after: /tmp/test2 (content)
@@ -5,3 +5,4 @@
#example
#test2
#example1
+#test1
changed: [localhost] => (item=#test1)
ok: [localhost] => (item=#test2)
--- before: /tmp/test2 (content)
+++ after: /tmp/test2 (content)
@@ -6,3 +6,4 @@
#test2
#example1
#test1
+#test3
changed: [localhost] => (item=#test3)
Brute force option
Read the files
- command: cat /tmp/test1
  register: test1
- command: cat /tmp/test2
  register: test2
Declare the variables
l1_alpha: "{{ test1.stdout_lines|select('match', '^[^#].*$') }}"
l1_glyph: "{{ test1.stdout_lines|select('match', '^#.*$') }}"
l2_alpha: "{{ test2.stdout_lines|select('match', '^[^#].*$') }}"
l2_glyph: "{{ test2.stdout_lines|select('match', '^#.*$') }}"
l1_alpha_diff: "{{ l1_alpha|difference(l2_alpha) }}"
l1_glyph_diff: "{{ l1_glyph|difference(l2_glyph) }}"
result: "{{ l2_alpha + l1_alpha_diff + l2_glyph + l1_glyph_diff }}"
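The select/difference pipeline above can be mirrored in plain Python to check the expected ordering (illustration only, using the sample file contents from the question):

```python
# Mirror of the Jinja select('match', ...) + difference() pipeline:
# split each file into non-comment and comment lines, then append the
# lines from test1 that test2 doesn't already have, per group.
test1 = ["argument1", "argument2", "#test1", "#test2", "#test3"]
test2 = ["argument1.1", "argument2192", "#example", "#test2", "#example1"]

l1_alpha = [l for l in test1 if not l.startswith("#")]
l1_glyph = [l for l in test1 if l.startswith("#")]
l2_alpha = [l for l in test2 if not l.startswith("#")]
l2_glyph = [l for l in test2 if l.startswith("#")]

result = (l2_alpha + [l for l in l1_alpha if l not in l2_alpha]
          + l2_glyph + [l for l in l1_glyph if l not in l2_glyph])
print(result)
```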
This gives the expected result
- debug:
    msg: |
      {% for line in result %}
      {{ line }}
      {% endfor %}
msg: |-
  argument1.1
  argument2192
  argument1
  argument2
  #example
  #test2
  #example1
  #test1
  #test3
Write it to a file
- copy:
    dest: /tmp/test2
    content: |
      {% for line in result %}
      {{ line }}
      {% endfor %}
gives
shell> cat /tmp/test2
argument1.1
argument2192
argument1
argument2
#example
#test2
#example1
#test1
#test3
Example of a complete playbook for testing
shell> cat pb.yml
- hosts: localhost
  vars:
    l1_alpha: "{{ test1.stdout_lines|select('match', '^[^#].*$') }}"
    l1_glyph: "{{ test1.stdout_lines|select('match', '^#.*$') }}"
    l2_alpha: "{{ test2.stdout_lines|select('match', '^[^#].*$') }}"
    l2_glyph: "{{ test2.stdout_lines|select('match', '^#.*$') }}"
    l1_alpha_diff: "{{ l1_alpha|difference(l2_alpha) }}"
    l1_glyph_diff: "{{ l1_glyph|difference(l2_glyph) }}"
    result: "{{ l2_alpha + l1_alpha_diff + l2_glyph + l1_glyph_diff }}"
  tasks:
    - command: cat /tmp/test1
      register: test1
      changed_when: false
    - command: cat /tmp/test2
      register: test2
      changed_when: false
    - copy:
        dest: /tmp/test2
        content: |
          {% for line in result %}
          {{ line }}
          {% endfor %}
The playbook is idempotent.

Multiple with_items in an Ansible module block

I want to create multiple logical volumes with a variable file, but it returns a syntax error: found character that cannot start any token. I have tried different approaches, but it still doesn't work.
main.yml
---
- name: playbook for create volume groups
  hosts: localhost
  become: true
  tasks:
    - include_vars: vars.yml
    - name: Create a logical volume
      lvol:
        vg: vg03
        lv: "{{ item.var1 }}"
        size: "{{ item.var2 }}"
      with_items:
        - { var1: "{{ var_lv_name }}", var2: "{{ var_lv_size }}" }
vars.yml
var_lv_name:
  - lv05
  - lv06
var_lv_size:
  - 1g
  - 1g
Use with_together. Test it first. For example,
- debug:
    msg: "Create lv: {{ item.0 }} size: {{ item.1 }}"
  with_together:
    - "{{ var_lv_name }}"
    - "{{ var_lv_size }}"
gives (abridged)
msg: 'Create lv: lv05 size: 1g'
msg: 'Create lv: lv06 size: 1g'
Optionally, put the declaration below into the file vars.yml
var_lv: "{{ var_lv_name|zip(var_lv_size) }}"
This creates the list
var_lv:
  - [lv05, 1g]
  - [lv06, 1g]
Use it in the code. The simplified task below gives the same results
- debug:
    msg: "Create lv: {{ item.0 }} size: {{ item.1 }}"
  loop: "{{ var_lv }}"
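For reference, the Jinja zip filter behaves like Python's built-in zip, pairing the two lists element by element (illustration only):

```python
var_lv_name = ["lv05", "lv06"]
var_lv_size = ["1g", "1g"]

# Equivalent of "{{ var_lv_name | zip(var_lv_size) | list }}"
var_lv = [list(pair) for pair in zip(var_lv_name, var_lv_size)]
print(var_lv)  # [['lv05', '1g'], ['lv06', '1g']]
```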
The previous answer is totally correct, but in my humble opinion we should be moving to the new way of doing things, with loop and filters.
Here's my answer:
---
- name: playbook for create volume groups
  hosts: localhost
  gather_facts: no
  become: true
  vars_files: vars.yml
  tasks:
    - name: Create a logical volume
      lvol:
        vg: vg03
        lv: "{{ item[0] }}"
        size: "{{ item[1] }}"
      loop: "{{ var_lv_name | zip(var_lv_size) | list }}"
This answer uses the newer loop keyword, with the zip filter pairing the two variable lists and the list filter turning the result into a list for iteration.
