Ansible turn files item into strings - python-3.x

I'm trying to find a clean way of converting a files list into a path (string) list.
So far I came up with this:
- name: Get apt source files
  find:
    paths: /etc/apt/sources.list.d
    use_regex: yes
    patterns: '^.*\.list$'
  register: source_files

- name: Loop through source files
  when:
    - item != SOME_VAR
    - DO_CLEAN_UP
  lineinfile:
    path: "{{ item }}"
    regexp: "^deb {{ REPO_CLEAN_URL }}" # set in vars/main.yml
    state: absent
  with_items:
    - /etc/apt/sources.list
    - "{{ source_files.files | items2dict(key_name='path', value_name='path') | list }}"
I would like to improve the "with_items" part please.

There is no such thing as a file object in Ansible playbooks; you only work with the basic data types supported by YAML/JSON: string, list, mapping (dictionary), integer and boolean.
When in doubt, make use of the debug module to display variables and see how Jinja2 constructs are evaluated; it works well even with loops.
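For example, the items2dict detour can be replaced by extracting the path attribute directly with map; a minimal sketch you can verify with debug first (using the same source_files variable registered above):

- debug:
    msg: "{{ ['/etc/apt/sources.list'] + (source_files.files | map(attribute='path') | list) }}"

The resulting expression can then be used as-is in with_items (or loop).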

Related

Ansible copy file to similar path locations

Currently I am trying to replace all Tomcat keystore files in a particular location across multiple nodes. The problem is, the directory structures are similar, but not exactly the same.
For example, our tomcat directory structure looks like this:
/home/tomcat121test/jdk-11.0.7+10.
But across the different nodes, the paths are slightly different. The differences are the tomcat folder name and the jdk folder name.
The structure is /home/tomcat<version_no><test_or_prod>/jdk-<jdk_version>, with each folder name all in one word.
e.g. /home/tomcat11test/jdk-11.0.7+10
So, the idea is to use cp as shown in the task named Backup the current keystore: cp -p /home/tomcat*/jdk*/keystore /home/tomcat*/jdk*/keystore_old_2021
My playbook currently looks like this:
---
- name: Update Tomcat Test Servers Keystore
  hosts: tomcattest_servers
  gather_facts: False

  tasks:
    - name: ls the jdk dir
      shell: ls -lah /home/tomcat*/jdk*/bin/
      register: ls_command_output

    - debug:
        var: ls_command_output.stdout_lines

    - name: Backup the current keystore
      shell: >
        cp -p /home/tomcat*/jdk*/keystore /home/tomcat*/jdk*/keystore_old_2021

    - name: Verify copy took place
      shell: ls -lah /home/tomcat*/jdk*/bin
      register: ls_command_output

    - debug:
        var: ls_command_output.stdout_lines
The task named Backup the current keystore is where it seems to be failing.
TASK [Backup the current keystore] *******************************************************************************************************************
fatal: [tomcattest1]: FAILED! => {"changed": true, "cmd": "cp -p /home/tomcat*/jdk*/keystore /home/tomcat*/jdk*/keystore_old_2021\n", "delta": "0:00:00.005322", "end": "2022-03-13 18:57:06.091283", "msg": "non-zero return code", "rc": 1, "start": "2022-03-13 18:57:06.085961", "stderr": "cp: cannot stat ‘/home/tomcat*/jdk*/keystore’: No such file or directory", "stderr_lines": ["cp: cannot stat ‘/home/tomcat*/jdk*/keystore’: No such file or directory"], "stdout": "", "stdout_lines": []}
The task named ls the jdk dir works fine, and both tasks use the shell module, which, as I understand it, is needed when a wildcard is used, instead of the command module.
Here is how I would rephrase, then approach, your requirement.
Problem statement:
In /home, we have an unknown folder matching the pattern tomcat.* to find.
In the folder found above, we have an unknown folder matching the pattern jdk.* to find.
In the folder found above, I want to ship a new file and back up the state of the existing file prior to copying.
Applying the DRY (Don't Repeat Yourself) principle:
Clearly, the first and second points of our problem statement are essentially the same, so it would be nice to have a mechanism answering the requirement: "for a given path, return me a unique folder matching a pattern".
Solution:
Ansible has multiple ways of helping you create sets of tasks that you can reuse. Here is a non-exhaustive list of two of them:
roles: a quite extensive way to reuse multiple Ansible artefacts, including, but not limited to, tasks, variables, handlers, files, etc.
the include_tasks module, which allows you to load an arbitrary YAML file containing a list of tasks
Because roles are a quite extensive mechanism, they require the creation of a set of folders that would be unrelated to this solution, so I am going to demonstrate this using the include_tasks module; depending on your needs and reusability considerations, creating a role might be a better bet.
So, here is the YAML that we would use in the include_tasks:
a find task based on a given folder
an extraction of the folders matching the given pattern out of the result of the find task, using the selectattr filter and the match test
an assertion that we have a unique folder matching our pattern
This gives us a file, called here find_exactly_one_folder.yml:
- find:
    path: "{{ root_folder }}"
    file_type: directory
  register: find_exactly_one_folder

- set_fact:
    found_folder: >-
      {{
        find_exactly_one_folder.files
        | selectattr('path', 'match', root_folder ~ '/' ~ folder_match)
        | map(attribute='path')
        | list
      }}

- assert:
    that:
      - found_folder | length == 1
    fail_msg: >-
      Did not find exactly one folder, result: `{{ found_folder }}`.
    success_msg: >-
      {{ found_folder.0 | default('') }} found
Now that we have that "for a given path, return me a unique folder matching a pattern" mechanism, we can have a playbook doing:
Find a unique folder matching the pattern tomcat.* from /home
Find a unique folder matching the pattern jdk.* from the folder resulting from the previous task
Copy the new file into the found folder, using the copy module's built-in backup mechanism
This results in this set of tasks:
- include_tasks:
    file: find_exactly_one_folder.yml
  vars:
    root_folder: /home
    folder_match: 'tomcat.*'

- include_tasks:
    file: find_exactly_one_folder.yml
  vars:
    root_folder: "{{ found_folder.0 }}"
    folder_match: 'jdk.*'

- copy:
    src: keystore
    dest: "{{ found_folder.0 }}/keystore"
    backup: true
Here is an example playbook, ending with an extra find and debug task to demonstrate that the resulting backup file is created:
- hosts: node1
  gather_facts: no

  tasks:
    - include_tasks:
        file: find_exactly_one_folder.yml
      vars:
        root_folder: /home
        folder_match: 'tomcat.*'

    - include_tasks:
        file: find_exactly_one_folder.yml
      vars:
        root_folder: "{{ found_folder.0 }}"
        folder_match: 'jdk.*'

    - copy:
        src: keystore
        dest: "{{ found_folder.0 }}/keystore"
        backup: true

    - find:
        path: "{{ found_folder.0 }}"
        pattern: "keystore*"
      register: keystores

    - debug:
        var: keystores.files | map(attribute='path')
This would yield:
PLAY [node1] **************************************************************

TASK [include_tasks] ******************************************************
included: /usr/local/ansible/find_exactly_one_folder.yml for node1

TASK [find] ***************************************************************
ok: [node1]

TASK [set_fact] ***********************************************************
ok: [node1]

TASK [assert] *************************************************************
ok: [node1] => changed=false
  msg: |-
    /home/tomcat11test found

TASK [include_tasks] ******************************************************
included: /usr/local/ansible/find_exactly_one_folder.yml for node1

TASK [find] ***************************************************************
ok: [node1]

TASK [set_fact] ***********************************************************
ok: [node1]

TASK [assert] *************************************************************
ok: [node1] => changed=false
  msg: |-
    /home/tomcat11test/jdk-11.0.7+10 found

TASK [copy] **************************************************************
changed: [node1]

TASK [find] **************************************************************
ok: [node1]

TASK [debug] *************************************************************
ok: [node1] =>
  keystores.files | map(attribute='path'):
  - /home/tomcat11test/jdk-11.0.7+10/keystore.690.2022-03-13#22:11:08~
  - /home/tomcat11test/jdk-11.0.7+10/keystore
After reviewing and reading what @β.εηοιτ.βε mentioned about using find, I went back and tried some other things before implementing what was mentioned (I just needed something quick and fast):
- name: Find the tomcat*/jdk*/bin/keystore, make copy of
  shell: find /home/ -iname keystore -exec cp -p "{}" "{}_old_2021" \;

- name: Check for copied keystore
  shell: ls -lah /home/tomcat*/jdk*/bin/keystore*
  register: ls_command_output

- debug:
    var: ls_command_output.stdout_lines
This did exactly what I needed.
UPDATE:
While the above command worked, when it came time to use the copy module to copy in the replacement keystore, the issue came up here:
- name: Copy New Keystore to Tomcat TEST servers
  copy:
    src: /opt/ansible/playbooks/ssl-renew/keystore
    dest: /home/tomcat*/jdk*/bin/
Note the destination path: this did not work; I had to specify the exact path.
So I will be looking more into @β.εηοιτ.βε's solution above.
I also wanted to say thank you for the detailed and descriptive response to my initial post.
I also looked into the fileglob lookup, but the first note on the Ansible documentation page for fileglob states:
"Patterns are only supported on files, not directory/paths."

Ansible lookup for particular key

I have to check the target VM's /etc/hosts file. If any IPs starting with 10.* are present in that file, it should report yes and show the IPs; if there are no such IPs, it should report No under that target hostname. All this information should end up in the build artifacts in Azure Pipelines. Please suggest possibilities.
Using the file lookup was actually a pretty good start. But like all lookups, it only runs on the controller machine (localhost). If you need to run this against a remote target VM, you will have to read the file from there. The idea I followed below is:
Use the slurp module to get the /etc/hosts content from the target into a variable on the controller
Split the content of the file on the newline character to get a list of lines
Loop over those lines and add the matching IPs to a list; the IPs are searched using a regexp with the match test and the regex_search filter
Show the content of the resulting list if that list is not empty
The example playbook:
---
- name: Check ips starting with 10. in /etc/hosts
  hosts: localhost
  gather_facts: false

  tasks:
    - name: Slurp /etc/hosts content from target vm
      slurp:
        src: /etc/hosts
      register: my_host_entries_slurped

    - name: Read /etc/hosts file in a list line by line
      set_fact:
        my_host_entries: "{{ (my_host_entries_slurped.content | b64decode).split('\n') }}"

    - name: Add matching ips to a list
      vars:
        ip_regex: "^10(\\.\\d{1,3}){3}"
      set_fact:
        matching_ips: "{{ matching_ips | default([]) + [item | regex_search(ip_regex)] }}"
      when: item is match(ip_regex)
      loop: "{{ my_host_entries }}"

    - name: Show list of matching ips
      debug:
        var: matching_ips
      when: matching_ips | default([]) | length > 0
You can adapt this to match your exact needs.
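For example, to produce the per-host yes/No report mentioned in the question and feed it to the Azure Pipelines build artifacts, one possibility is to write one small file per host on the controller; a sketch, assuming an ./artifacts directory exists there (hypothetical path):

- name: Write a per-host report for the pipeline artifacts
  copy:
    content: "{{ inventory_hostname }}: {{ matching_ips | default([]) | join(', ') | default('No', true) }}"
    dest: "./artifacts/{{ inventory_hostname }}.txt"
  delegate_to: localhost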
Note: if you are not totally familiar with regexps, the one I used, which is (shown without the escaped \\ needed in the YAML string):
^10(\.\d{1,3}){3}
means: search for 10 at the beginning of the line, followed by a group made of a . followed by 1 to 3 digits, that group being repeated exactly 3 times.
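To see it in action on a sample line, a quick ad-hoc sketch:

- debug:
    msg: "{{ '10.11.12.13 some-internal-host' | regex_search('^10(\\.\\d{1,3}){3}') }}"

This prints 10.11.12.13; a line starting with, say, 192.168.0.1 would not match.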

Ansible multiple facts in Jinja template

I want to pull all the interface names from a host and then print all the information of that interface.
--- # Fetches network interfaces with IPs
- hosts: hta
  gather_facts: yes
  become: yes
  tasks:
    - debug: msg="{{ ansible_interfaces|length }}"
      register: num
    - name: moving template over to server
      template: src=templates/network.j2 dest=/root/network_info.txt
And the network.j2 file
{% for int in ansible_interfaces %}
Interfaces: Interface-{{ int }}
Data: ansible_{{ int }}
{% endfor %}
So far I couldn't print the information; Ansible takes my input ansible_{{ int }} literally.
The play below
- command: "ifconfig {{ item }}"
register: result
loop: "{{ ansible_interfaces }}"
- template:
src: template.j2
dest: int.txt
delegate_to: localhost
with this template
{% for int in result.results %}
Interfaces: Interface-{{ int.item }}
Data: {{ int.stdout }}
{% endfor %}
creates the file int.txt on localhost with the interfaces' data.
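As an aside, the original template can also be made to work by resolving the fact name dynamically instead of writing ansible_{{ int }} literally; a sketch using the vars lookup (fact names replace dashes in interface names with underscores, hence the replace filter):

{% for int in ansible_interfaces %}
Interfaces: Interface-{{ int }}
Data: {{ lookup('vars', 'ansible_' ~ int | replace('-', '_')) }}
{% endfor %}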
What I don't really get is that you are calling a server to gather info about its interfaces, only to send a file back to that same server with info you could gather again at any time. I don't really see the point, but here we go.
Applying the KISS principle: call ifconfig, which returns details about all the interfaces, and store the result in a file on the remote host.
playbook.yml

- name: Simple interface info dump on hosts
  hosts: whatevergroup_you_need
  become: true
  gather_facts: false

  tasks:
    - name: dump ifconfig result to /root/network_interfaces.txt
      shell: ifconfig > /root/network_interfaces.txt
Notes:
become: true is only needed because you want to write the file in root's home. If you write the file anywhere else with proper permissions, ifconfig itself is executable by anyone.
Since there is no need to collect any other info from the host, gather_facts: false will speed up the playbook for this one single easy task.
The shell module is mandatory for the output redirection to the file. If you are concerned about security, you can use the command module instead (without the redirection), capture the output with register, and write the content to a file in a next task, as sketched below.
I assumed you are calling a Linux host and that ifconfig outputs the info you need. If that is not the case, you need to rewrite your question and be more accurate about what you are trying to achieve.
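A minimal sketch of that command-plus-register alternative (writing to the same path as above):

- name: Capture ifconfig output without a shell redirection
  command: ifconfig
  register: ifconfig_result

- name: Write the captured output to a file on the remote host
  copy:
    content: "{{ ifconfig_result.stdout }}"
    dest: /root/network_interfaces.txt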

Unable to update sshd config file with Ansible

I have followed the solution posted in Ansible to update sshd config file; however, I am getting the following errors.
TASK [Add Group to AllowGroups]
fatal: [testpsr]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (lineinfile) module: when Supported parameters include: attributes, backrefs, backup, content, create, delimiter, directory_mode, firstmatch, follow, force, group, insertafter, insertbefore, line, mode, owner, path, regexp, remote_src, selevel, serole, setype, seuser, src, state, unsafe_writes, validate"}
Here are the tasks I have.
- name: Capture AllowUsers from sshd_config
  command: bash -c "grep '^AllowUsers' /etc/ssh/sshd_config.bak"
  register: old_userlist
  changed_when: no

- name: Add Group to AllowUsers
  lineinfile:
    regexp: "^AllowUsers"
    backup: True
    dest: /etc/ssh/sshd_config.bak
    line: "{{ old_userlist.stdout }} {{ usernames }}"
    when:
      - old_userlist is succeeded
The error tells you what's wrong:
FAILED! => {"changed": false, "msg": "Unsupported parameters for (lineinfile) module: when
You nested when under the lineinfile module, while it should be nested under the task itself.
This is your code fixed, and probably what you meant:
- name: Capture AllowUsers from sshd_config
  command: "grep '^AllowUsers' /etc/ssh/sshd_config.bak"
  register: old_userlist
  changed_when: no

- name: Add Group to AllowUsers
  lineinfile:
    regexp: "^AllowUsers"
    backup: yes
    dest: /etc/ssh/sshd_config.bak
    line: "{{ old_userlist.stdout }} {{ usernames }}"
  when: old_userlist is succeeded
I also fixed a couple of things: using bash -c in command is redundant in your case.
Please make sure you use code formatting when pasting code or logs on StackOverflow, as your question is currently unreadable.
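One extra safeguard worth considering: lineinfile supports a validate parameter (it even appears in the error's list of supported parameters), so sshd can check the file before it is saved; a sketch, assuming sshd lives at /usr/sbin/sshd:

- name: Add Group to AllowUsers, validating the result
  lineinfile:
    regexp: "^AllowUsers"
    dest: /etc/ssh/sshd_config.bak
    line: "{{ old_userlist.stdout }} {{ usernames }}"
    validate: /usr/sbin/sshd -t -f %s
  when: old_userlist is succeeded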

How to replace Windows-style CR/LF line endings with Linux-style endings in ansible?

I tried this in my task, but it doesn't seem to work:
- name: Fix line endings from CRLF to LF
  local_action: replace dest={{my_dir}}/conf/{{item}} regexp='\r\n' replace='\n'
I usually do this using sed as follows, and it works:
sed -i 's/\r//g' file
I want to avoid using the shell module for this replacement, as it throws a warning in Ansible.
You can remove the CRLF line endings with the replace module. Your playbook might look like:
---
- hosts: all
  tasks:
    - local_action: replace dest={{my_dir}}/conf/{{item}} regexp="\r"
By not specifying the replace parameter of the replace module, it will just remove all carriage returns. See http://docs.ansible.com/ansible/replace_module.html.
I tested this with a local file I created and it worked when testing on localhost. It also worked when I added localhost to the /etc/ansible/hosts file and had the following playbook instead:
---
- hosts: all
  tasks:
    - replace: dest={{my_dir}}/conf/{{item}} regexp="\r"
Just be sure to use the absolute file path.
You can do something like this:

- set_fact:
    my_content: "{{ lookup('file', my_dir ~ '/conf/' ~ item) | replace('\r\n', '\n') }}"

After this you can use the content or save it back to disk.
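For instance, to save it back to disk with the copy module (a sketch reusing the fact set above):

- copy:
    content: "{{ my_content }}"
    dest: "{{ my_dir }}/conf/{{ item }}"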
The following converts line endings using the Jinja2 template engine. A line-ending directive is inserted at the beginning of the source file on the Ansible controller (delegate_to: localhost). Sending the file to the downstream server can then be done by applying template or win_template to the file.
It handles source files with any line ending, which can be useful if you're working through a list of files from more than one origin.
- name: prepare to add line endings
  lineinfile:
    insertbefore: BOF
    dest: '{{ src_file }}'
    line: '#jinja2: newline_sequence:"\n"'
    # for Linux to Windows: line: '#jinja2: newline_sequence:"\r\n"'
  delegate_to: localhost

- name: copy changed file with correct line-endings
  template: # win_template for Linux to Windows
    src: '{{ src_file }}'
    dest: '{{ dest_file }}'
