Ansible: check whether a file contains only given lines, no more - security

Using Ansible, I try to make sure that the .ssh/authorized_keys files of our servers contain only a given set of ssh keys, in any order.
If one is missing, add it (no problem: lineinfile).
If someone else sneaked in an extra key (one that is not in the "with_items" list), remove it and return some warning, or something. Well... "changed" could be acceptable too, but it would be nice to somehow differentiate the "missing" and "sneaked in" lines.
The first part is easy:
- name: Ensure ssh authorized_keys contains the right users
  lineinfile:
    path: /root/.ssh/authorized_keys
    owner: root
    group: root
    mode: 0600
    state: present
    line: '{{ item }}'
  with_items:
    - ssh-rsa AABBCC112233... root@someserver.com
    - ssh-rsa DDEEFF112233... user@anothersomeserver.com
But the second part looks more tricky. At least to get it done with short and elegant code.
Any ideas?

There's the authorized_key module, and it has an exclusive option.
But pay attention: exclusive is not loop aware, so it doesn't work with with_items, as the anti-example below shows.
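As an anti-example (my own illustration, not from the original answer): with a loop, every iteration rewrites the file exclusively, so only the key from the last iteration would survive.

# Anti-pattern: exclusive is applied per loop iteration,
# so the file ends up containing only the last key of the list.
- authorized_key:
    user: root
    state: present
    exclusive: yes
    key: '{{ item }}'
  with_items: '{{ ssh_keys }}'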
Use something like this:
- name: Ensure ssh authorized_keys contains the right users
  authorized_key:
    user: root
    state: present
    exclusive: yes
    key: '{{ ssh_keys | join("\n") }}'
  vars:
    ssh_keys:
      - ssh-rsa AABBCC112233... root@someserver.com
      - ssh-rsa DDEEFF112233... user@anothersomeserver.com
Test before use!
If you have keys in files, you can find this answer useful.
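If you also want to surface which keys were "sneaked in", here is a minimal sketch (my own addition; it assumes the same ssh_keys list is available, e.g. as a play-level variable): read the file back with slurp and diff it against the allowed list.

- name: Read the current authorized_keys from the target
  slurp:
    src: /root/.ssh/authorized_keys
  register: auth_keys_raw

- name: Report keys that are not in the allowed list
  debug:
    msg: 'Sneaked-in key: {{ item }}'
  # drop empty lines with select(), then keep only lines absent from ssh_keys
  loop: "{{ (auth_keys_raw.content | b64decode).split('\n') | select() | list | difference(ssh_keys) }}"

This only reports; the authorized_key task with exclusive: yes above is what actually removes the extra keys.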

Related

Ansible copy file to similar path locations

Currently I am trying to replace all Tomcat keystore files in a particular location across multiple nodes. The problem is, the directory structures are similar, but not exactly the same.
For example, our tomcat directory structure looks like this:
/home/tomcat121test/jdk-11.0.7+10
But across the different nodes, the paths are slightly different; the differences are the tomcat folder name and the jdk folder name.
The structure is /home/tomcat<version_no><test_or_prod>/jdk-<jdk_version>, all in one word for each folder name,
e.g. /home/tomcat11test/jdk-11.0.7+10
So, the idea is to use cp as shown in the task named Backup the current keystore: cp -p /home/tomcat*/jdk*/keystore /home/tomcat*/jdk*/keystore_old_2021
My play book currently looks like this:
---
- name: Update Tomcat Test Servers Keystore
  hosts: tomcattest_servers
  gather_facts: False

  tasks:
    - name: ls the jdk dir
      shell: ls -lah /home/tomcat*/jdk*/bin/
      register: ls_command_output

    - debug:
        var: ls_command_output.stdout_lines

    - name: Backup the current keystore
      shell: >
        cp -p /home/tomcat*/jdk*/keystore /home/tomcat*/jdk*/keystore_old_2021

    - name: Verify copy took place
      shell: ls -lah /home/tomcat*/jdk*/bin
      register: ls_command_output

    - debug:
        var: ls_command_output.stdout_lines
The task named Backup the current keystore is where it seems to be failing:
TASK [Backup the current keystore] *******************************************************************************************************************
fatal: [tomcattest1]: FAILED! => {"changed": true, "cmd": "cp -p /home/tomcat*/jdk*/keystore /home/tomcat*/jdk*/keystore_old_2021\n", "delta": "0:00:00.005322", "end": "2022-03-13 18:57:06.091283", "msg": "non-zero return code", "rc": 1, "start": "2022-03-13 18:57:06.085961", "stderr": "cp: cannot stat ‘/home/tomcat*/jdk*/keystore’: No such file or directory", "stderr_lines": ["cp: cannot stat ‘/home/tomcat*/jdk*/keystore’: No such file or directory"], "stdout": "", "stdout_lines": []}
The task named ls the jdk dir works fine, and both tasks use the shell module which, in my understanding, is needed instead of the command module when a wildcard is used.
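As a side illustration of that point (hypothetical minimal tasks, not from the original post): the command module does not pass the command line through a shell, so a glob reaches the program as a literal string, while the shell module lets /bin/sh expand it first.

- name: Fails, ls receives the glob as a literal argument (no shell expansion)
  command: ls -lah /home/tomcat*/jdk*/bin/

- name: Works, /bin/sh expands the glob before ls runs
  shell: ls -lah /home/tomcat*/jdk*/bin/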
Here is how I would rephrase, then approach your requirement.
Problem statement:
In /home, we have an unknown folder matching a pattern tomcat.* that we need to find.
In the folder found above, we have an unknown folder matching a pattern jdk.* that we need to find.
In the folder found above, I want to ship a new file and back up the state of the existing file prior to copying.
Applying the DRY (Don't Repeat Yourself) principle:
Clearly the first and second points of our problem statement are essentially the same, so it would be nice to have a mechanism answering the requirement: "For a given path, return me a unique folder matching a pattern".
Solution:
Ansible has multiple ways of helping you create sets of tasks that you can reuse. Here is a non-exhaustive list of two of them:
roles: a quite extensive way to reuse multiple Ansible artefacts, including, but not limited to, tasks, variables, handlers, files, etc.
the include_tasks module, which allows you to load an arbitrary YAML file containing a list of tasks
Because roles are a quite extensive mechanism, requiring the creation of a set of folders unrelated to this solution, I am going to demonstrate this using the include_tasks module; but depending on your needs and reusability considerations, creating a role might be a better bet.
So, here is what would be the YAML that we would use in the include_tasks:
a find task based on a given folder
an extraction of the folder matching the given pattern out of the result of the find task, using the selectattr filter and the match test.
an assertion that we have a unique folder matching our pattern
This gives us a file, called here find_exactly_one_folder.yml:
- find:
    path: "{{ root_folder }}"
    file_type: directory
  register: find_exactly_one_folder

- set_fact:
    found_folder: >-
      {{
        find_exactly_one_folder.files
        | selectattr('path', 'match', root_folder ~ '/' ~ folder_match)
        | map(attribute='path')
      }}

- assert:
    that:
      - found_folder | length == 1
    fail_msg: >-
      Did not find exactly one folder, result: `{{ found_folder }}`.
    success_msg: >-
      {{ found_folder.0 | default('') }} found
Now that we have that "For a given path, return me a unique folder matching a pattern" mechanism, we can have a playbook doing:
Find a unique folder matching the pattern tomcat.* from /home
Find a unique folder matching the pattern jdk.* from the folder resulting from the previous task
Copy the new file in the found folder, using the existing backup mechanism
This results in the following set of tasks:
- include_tasks:
    file: find_exactly_one_folder.yml
  vars:
    root_folder: /home
    folder_match: 'tomcat.*'

- include_tasks:
    file: find_exactly_one_folder.yml
  vars:
    root_folder: "{{ found_folder.0 }}"
    folder_match: 'jdk.*'

- copy:
    src: keystore
    dest: "{{ found_folder.0 }}/keystore"
    backup: true
Here is an example playbook, ending with an extra find and debug task to demonstrate that the resulting backup file is created:
- hosts: node1
  gather_facts: no

  tasks:
    - include_tasks:
        file: find_exactly_one_folder.yml
      vars:
        root_folder: /home
        folder_match: 'tomcat.*'

    - include_tasks:
        file: find_exactly_one_folder.yml
      vars:
        root_folder: "{{ found_folder.0 }}"
        folder_match: 'jdk.*'

    - copy:
        src: keystore
        dest: "{{ found_folder.0 }}/keystore"
        backup: true

    - find:
        path: "{{ found_folder.0 }}"
        pattern: "keystore*"
      register: keystores

    - debug:
        var: keystores.files | map(attribute='path')
This would yield:
PLAY [node1] **************************************************************
TASK [include_tasks] ******************************************************
included: /usr/local/ansible/find_exactly_one_folder.yml for node1
TASK [find] ***************************************************************
ok: [node1]
TASK [set_fact] ***********************************************************
ok: [node1]
TASK [assert] *************************************************************
ok: [node1] => changed=false
  msg: |-
    /home/tomcat11test found
TASK [include_tasks] ******************************************************
included: /usr/local/ansible/find_exactly_one_folder.yml for node1
TASK [find] ***************************************************************
ok: [node1]
TASK [set_fact] ***********************************************************
ok: [node1]
TASK [assert] *************************************************************
ok: [node1] => changed=false
  msg: |-
    /home/tomcat11test/jdk-11.0.7+10 found
TASK [copy] **************************************************************
changed: [node1]
TASK [find] **************************************************************
ok: [node1]
TASK [debug] *************************************************************
ok: [node1] =>
keystores.files | map(attribute='path'):
- /home/tomcat11test/jdk-11.0.7+10/keystore.690.2022-03-13#22:11:08~
- /home/tomcat11test/jdk-11.0.7+10/keystore
After reviewing and reading what @β.εηοιτ.βε mentioned about using find, I went back and tried some other things before implementing what was mentioned (I just needed something quick and fast).
- name: Find the tomcat*/jdk*/bin/keystore, make a copy of it
  shell: find /home/ -iname keystore -exec cp -p "{}" "{}_old_2021" \;

- name: Check for copied keystore
  shell: ls -lah /home/tomcat*/jdk*/bin/keystore*
  register: ls_command_output

- debug:
    var: ls_command_output.stdout_lines
This did exactly what I needed.
UPDATE:
While the above command worked, when it came time to use the copy module to copy in the replacement keystore, an issue came up here:
- name: Copy New Keystore to Tomcat TEST servers
  copy:
    src: /opt/ansible/playbooks/ssl-renew/keystore
    dest: /home/tomcat*/jdk*/bin/
Note the destination path: this did not work. I had to specify the exact path.
So I will be looking more into @β.εηοιτ.βε's solution above.
I also wanted to say thank you for the detailed and descriptive response to my initial post.
I also looked into the fileglob lookup, but the first note on the Ansible documentation page for fileglob states:
"Patterns are only supported on files, not directory/paths."

Ansible lookup for particular key

I have to check the target VM's /etc/hosts file for any IPs that start with 10.*. If any are there in that file, it should report yes and show the IPs; if there are no such IPs, it should report no under that target hostname. All this information should end up in the build artifacts in Azure Pipelines. Please suggest possibilities.
Using the file lookup was actually a pretty good start, but like all lookups, it only runs on the controller machine (localhost). If you need to check a remote target VM, you have to read the file from there. The idea I followed below is:
Use the slurp module to fetch the /etc/hosts content from the target into a variable on the controller
Split the content of the file on the newline character to get a list of lines
Loop over those lines and add the matching IPs to a list; the IPs are extracted using a regexp, with the match test and the regex_search filter
Show the content of the resulting list if that list is not empty
The example playbook:
---
- name: Check ips starting with 10. in /etc/hosts
  hosts: localhost
  gather_facts: false

  tasks:
    - name: Slurp /etc/hosts content from target vm
      slurp:
        src: /etc/hosts
      register: my_host_entries_slurped

    - name: Read /etc/hosts file in a list line by line
      set_fact:
        my_host_entries: "{{ (my_host_entries_slurped.content | b64decode).split('\n') }}"

    - name: Add matching ips to a list
      vars:
        ip_regex: "^10(\\.\\d{1,3}){3}"
      set_fact:
        matching_ips: "{{ matching_ips | default([]) + [item | regex_search(ip_regex)] }}"
      when: item is match(ip_regex)
      loop: "{{ my_host_entries }}"

    - name: Show list of matching ips
      debug:
        var: matching_ips
      when: matching_ips | default([]) | length > 0
You can adapt to match your exact needs.
Note: if you are not totally familiar with regexps, the one I used, which is (without the escaped \\ in the YAML string)
^10(\.\d{1,3}){3}
means: search for 10 at the beginning of the line, followed by a group of chars starting with a . and followed by 1 to 3 digits; repeat this last group exactly 3 times.
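For instance, against a few hypothetical /etc/hosts lines (my own examples, not from the question), the regexp behaves like this:

10.1.2.3      app01    # matches; regex_search returns "10.1.2.3"
110.1.2.3     app02    # no match: the line does not start with "10"
10.1.2        app03    # no match: only two ".digits" groups follow the "10"
192.168.0.10  app04    # no match: "10" is not at the beginning of the line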

Ansible gets error running command, No error if I enter it manually [duplicate]

Hi, I am trying to find out how to set an environment variable with Ansible, something like this simple shell command:
export LC_ALL=C
I tried it as a shell command and got an error; I tried using the environment keyword and nothing happened. What am I missing?
There are multiple ways to do this, and from your question it's not clear what you need.
1. If you need an environment variable defined PER TASK ONLY, you do this:
- hosts: dev
  tasks:
    - name: Echo my_env_var
      shell: "echo $MY_ENV_VARIABLE"
      environment:
        MY_ENV_VARIABLE: whatever_value

    - name: Echo my_env_var again
      shell: "echo $MY_ENV_VARIABLE"
Note that MY_ENV_VARIABLE is available ONLY for the first task; environment does not set it permanently on your system.
TASK: [Echo my_env_var] *******************************************************
changed: [192.168.111.222] => {"changed": true, "cmd": "echo $MY_ENV_VARIABLE", ... "stdout": "whatever_value"}
TASK: [Echo my_env_var again] *************************************************
changed: [192.168.111.222] => {"changed": true, "cmd": "echo $MY_ENV_VARIABLE", ... "stdout": ""}
Hopefully, using environment will soon also be possible at play level, not only at task level as above.
There's currently a pull request open for this feature on Ansible's GitHub: https://github.com/ansible/ansible/pull/8651
UPDATE: It's now merged as of Jan 2, 2015.
2. If you want a permanent environment variable, system-wide or only for a certain user
You should look into how to do it in your Linux distribution / shell; there are multiple places for that. For example, in Ubuntu you define it in files such as:
~/.profile
/etc/environment
the /etc/profile.d directory
...
You will find the Ubuntu docs about it here: https://help.ubuntu.com/community/EnvironmentVariables
In the end, for setting an environment variable on e.g. Ubuntu, you can just use the lineinfile module from Ansible and add the desired line to the appropriate file. Consult your OS docs to know where to add it to make it permanent.
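A minimal sketch of that lineinfile approach, assuming Ubuntu and /etc/environment as the target file (the variable and value are just examples, adapt to your needs):

- name: Persist LC_ALL in /etc/environment (takes effect on next login)
  lineinfile:
    path: /etc/environment
    regexp: '^LC_ALL='
    line: 'LC_ALL=C'
  become: yes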
I did not have enough reputation to comment and hence am adding a new answer.
Gasek's answer is quite correct. Just one thing: if you are updating the .bash_profile file or /etc/profile, those changes will be reflected only after a new login.
In case you want to set the env variable and then use it in subsequent tasks in the same playbook, consider adding those environment variables to the .bashrc file.
I guess the reason behind this is login vs. non-login shells: Ansible, while executing different tasks, reads the parameters from the .bashrc file instead of .bash_profile or /etc/profile.
As an example: I updated my path variable to include the custom binary in the .bash_profile file of the respective user and then sourced the file; the subsequent tasks didn't recognize my command. However, if you update the .bashrc file instead, the command works.
- name: Adding the path in the bashrc files
  lineinfile:
    dest: /root/.bashrc
    line: 'export PATH=$PATH:path-to-mysql/bin'
    insertafter: 'EOF'
    regexp: 'export PATH=\$PATH:path-to-mysql/bin'
    state: present

- name: Source the bashrc file
  shell: source /root/.bashrc

- name: Start the mysql client
  shell: mysql -e "show databases";
This works; had I done it using the profile files, mysql -e "show databases" would have given an error.
- name: Adding the path in the Profile files
  lineinfile:
    dest: /root/.bash_profile
    line: 'export PATH=$PATH:{{ install_path }}/{{ mysql_folder_name }}/bin'
    insertafter: 'EOF'
    regexp: 'export PATH=\$PATH:{{ install_path }}/{{ mysql_folder_name }}/bin'
    state: present

- name: Source the bash_profile file
  shell: source /root/.bash_profile

- name: Start the mysql client
  shell: mysql -e "show databases";
This one won't work if we have all these tasks in the same playbook.
Here's a quick local task to permanently set key/values on /etc/environment (which is system-wide, all users, thus become is needed):
- name: populate /etc/environment
  lineinfile:
    path: "/etc/environment"
    state: present
    regexp: "^{{ item.key }}="
    line: "{{ item.key }}={{ item.value }}"
  with_items: "{{ os_environment }}"
  become: yes
and the vars for it:
os_environment:
  - key: DJANGO_SETTINGS_MODULE
    value: websec.prod_settings
  - key: DJANGO_SUPER_USER
    value: admin
And yes, if you ssh out and back in, env shows the new environment variables.
p.s. It used to be dest as in:
dest: "/etc/environment"
but see the comment
For the task only: inlining works, some of the time.
Note: the stuff below is more an observation/experiment than a recommendation.
The first task is the equivalent of Michael's top-voted answer.
The second doesn't work, but then again foo=1 echo $foo doesn't work in bash either: $foo is expanded by the shell before the command (and its one-off environment) exists.
The third does work, as it does in bash, and takes very little effort. However... when I tried doing this to set a node variable, it failed miserably until I used Michael's answer.
tasks:
  - name: Echo my_env_var
    shell: "echo $MY_ENV_VARIABLE"
    environment:
      MY_ENV_VARIABLE: value1

  - name: Echo my_env_var inline, doesnt work in bash either
    shell: "MY_ENV_VARIABLE=value2 echo $MY_ENV_VARIABLE"

  - name: set my_env_var inline then env
    shell: "MY_ENV_VARIABLE=value3 env | egrep MY_ENV"
output:
TASK [Echo my_env_var] *********************************************************
changed: [192.168.63.253] => changed=true
  cmd: echo $MY_ENV_VARIABLE
  stdout: value1

TASK [Echo my_env_var inline, doesnt work in bash either] **********************
changed: [192.168.63.253] => changed=true
  cmd: MY_ENV_VARIABLE=value2 echo $MY_ENV_VARIABLE
  stdout: ''

TASK [set my_env_var inline then env] ******************************************
changed: [192.168.63.253] => changed=true
  cmd: MY_ENV_VARIABLE=value3 env | egrep MY_ENV
  stdout: MY_ENV_VARIABLE=value3
This is the best option. As Michal Gasek said in the first answer, since the pull request (https://github.com/ansible/ansible/pull/8651) was merged, we are able to set environment variables easily at play level:
- hosts: all
  roles:
    - php
    - nginx
  environment:
    MY_ENV_VARIABLE: whatever_value
For persistently setting environment variables, you can use one of the existing roles over at Ansible Galaxy. I recommend weareinteractive.environment.
Using ansible-galaxy:
$ ansible-galaxy install weareinteractive.environment
Using requirements.yml:
- src: franklinkim.environment
Then in your playbook:
- hosts: all
  sudo: yes
  roles:
    - role: franklinkim.environment
      environment_config:
        NODE_ENV: staging
        DATABASE_NAME: staging
I'm installing the krew plugin manager and some of its plugins using Ansible, for a Fish shell. To install the plugins, I wanted to use the $PATH value as set by my previous task in ~/.config/fish/config.fish.
Error
Using the shell module with the executable parameter was throwing:
stderr: |-
  error: unknown command "krew" for "kubectl"
  error: unknown command "krew" for "kubectl"
  error: unknown command "krew" for "kubectl"
Solution
Add a line declaring the path to the krew bin in ~/.config/fish/config.fish:
- name: Add krew to $PATH
  lineinfile:
    path: '{{ home }}/.config/fish/config.fish'
    search_string: krew
    line: set --append --export --global PATH $HOME/.krew/bin
Then, use the shell module to source Fish config file and run kubectl commands to install my plugins:
- name: Install `kubectx` (context) and `kubens` (namespace)
  shell: |
    source {{ home }}/.config/fish/config.fish
    kubectl krew install ctx
    kubectl krew install ns
    kubectl krew install oidc-login
  args:
    executable: /usr/bin/fish
Disclaimer
Looks a bit hack-ish to me; I would love a more Ansible-ish solution.
You can also set environment variables with a custom become plugin.
hosts setting
ansible_become=yes
ansible_become_method=foo
become_plugins/foo.py
from ansible.plugins.become import BecomeBase


class BecomeModule(BecomeBase):

    def build_become_command(self, cmd, shell):
        # Prepend the environment variable to every become-wrapped command
        cmd = 'PYTHONPATH="/foo/bar:$PYTHONPATH" ' + cmd
        return cmd
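For the record (my own note, not part of the original answer): Ansible discovers custom become plugins from a become_plugins directory next to the playbook, or from a path configured in ansible.cfg, for example:

[defaults]
become_plugins = ./become_plugins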

Unable to update sshd config file with Ansible

I have followed the solution posted in the post Ansible to update sshd config file, however I am getting the following errors.
TASK [Add Group to AllowGroups]
fatal: [testpsr]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (lineinfile) module: when Supported parameters include: attributes, backrefs, backup, content, create, delimiter, directory_mode, firstmatch, follow, force, group, insertafter, insertbefore, line, mode, owner, path, regexp, remote_src, selevel, serole, setype, seuser, src, state, unsafe_writes, validate"}
Here are the tasks I have:
- name: Capture AllowUsers from sshd_config
  command: bash -c "grep '^AllowUsers' /etc/ssh/sshd_config.bak"
  register: old_userlist
  changed_when: no

- name: Add Group to AllowUsers
  lineinfile:
    regexp: "^AllowUsers"
    backup: True
    dest: /etc/ssh/sshd_config.bak
    line: "{{ old_userlist.stdout }} {{ usernames }}"
    when:
      - old_userlist is succeeded
The error tells you what's wrong:
FAILED! => {"changed": false, "msg": "Unsupported parameters for (lineinfile) module: when
You nested when under the lineinfile module, while it should be nested under the task itself.
This is your code fixed, and probably what you meant:
- name: Capture AllowUsers from sshd_config
  command: "grep '^AllowUsers' /etc/ssh/sshd_config.bak"
  register: old_userlist
  changed_when: no

- name: Add Group to AllowUsers
  lineinfile:
    regexp: "^AllowUsers"
    backup: yes
    dest: /etc/ssh/sshd_config.bak
    line: "{{ old_userlist.stdout }} {{ usernames }}"
  when: old_userlist is succeeded
I also fixed a couple of things: using bash -c in command is redundant in your case.
Please make sure you use code formatting when pasting code or logs on Stack Overflow, as your question is currently unreadable.

Ansible uncomment line in file

I want to uncomment a line in file sshd_config by using Ansible and I have the following working configuration:
- name: Uncomment line from /etc/ssh/sshd_config
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: '^#AuthorizedKeysFile'
    line: 'AuthorizedKeysFile .ssh/authorized_keys'
However, this config only works if the line starts with #AuthorizedKeysFile; it won't work if the line starts with # AuthorizedKeysFile (one or more spaces between the # and the words).
How can I configure the regexp so it won't take into account any number of spaces after the '#'?
I've tried adding another lineinfile option with a space after the '#', but this is not a good solution:
- name: Uncomment line from /etc/ssh/sshd_config
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: '# AuthorizedKeysFile'
    line: 'AuthorizedKeysFile .ssh/authorized_keys'
If you need zero or more white spaces after the '#' character, the following should suffice:
- name: Uncomment line from /etc/ssh/sshd_config
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: '^#\s*AuthorizedKeysFile.*$'
    line: 'AuthorizedKeysFile .ssh/authorized_keys'
The modification to your original code is the addition of \s* and .*$ in the regexp.
Explanation:
\s - matches whitespace (spaces, tabs, line breaks and form feeds)
* - specifies that the expression to its left (\s) may occur zero or more times in a match
.* - matches zero or more of any character
$ - matches the end of the line
Firstly, you are using the wrong language. With Ansible, you don't tell it what to do; you define the desired state. So it shouldn't be Uncomment line from /etc/ssh/sshd_config, but rather Ensure AuthorizedKeysFile is set to .ssh/authorized_keys.
Secondly, it doesn't matter what the initial state is (whether the line is commented or not). You must specify a single, unique string that identifies the line.
With sshd_config this is possible, as the AuthorizedKeysFile directive occurs only once in the file. With other configuration files this might be more difficult.
- name: Ensure AuthorizedKeysFile is set to .ssh/authorized_keys
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: AuthorizedKeysFile
    line: 'AuthorizedKeysFile .ssh/authorized_keys'
It will match any line containing the AuthorizedKeysFile string (no matter whether it's commented or not, or how many spaces there are) and ensure the full line is:
AuthorizedKeysFile .ssh/authorized_keys
If the line was different, Ansible will report a "changed" state.
On the second run, Ansible will find the AuthorizedKeysFile string again and discover the line is already in the desired state, so it will end the task with an "ok" state.
One caveat with the above task is that if any line contains a real, intentional comment with the string AuthorizedKeysFile (for example an explanation in English), Ansible will replace that line with the value specified in line.
I should caveat this with @techraf's point that 99% of the time a full template of the configuration file is better.
Times I have used lineinfile include weird and wonderful configuration files managed by some other process, or laziness about config I don't fully understand yet, which may vary by distro/version and whose variants I don't want to maintain... yet.
Go forth and learn more Ansible... it is great, because you can keep iterating on it from raw bash shell commands right up to best practice.
lineinfile module
Still, it's good to see how to manage just one or two settings a little better with this:
tasks:
  - name: Apply sshd_config settings
    lineinfile:
      path: /etc/ssh/sshd_config
      # might be commented out, whitespace between key and value
      regexp: '^#?\s*{{ item.key }}\s'
      line: "{{ item.key }} {{ item.value }}"
      validate: '/usr/sbin/sshd -T -f %s'
    with_items:
      - key: MaxSessions
        value: 30
      - key: AuthorizedKeysFile
        value: .ssh/authorized_keys
    notify: restart sshd

handlers:
  - name: restart sshd
    service:
      name: sshd
      state: restarted
validate: don't make the change if the change is invalid
notify/handlers: the correct way to restart once, only at the end
with_items (soon to become loop): if you have multiple settings
^#?: the setting might be commented out - see the other answer
\s*{{ item.key }}\s: will not match other settings (i.e. SettingA cannot match NotSettingA or SettingAThisIsNot)
This still might clobber a comment like # AuthorizedKeysFile - is a setting, which we have to live with, because there could be a setting like AuthorizedKeysFile /some/path # is a setting... re-read the caveat.
template module
- name: Configure sshd
  template:
    src: sshd_config.j2
    dest: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: "0644"
    validate: '/usr/sbin/sshd -T -f %s'
  notify: restart sshd

handlers:
  - name: restart sshd
    service:
      name: sshd
      state: restarted
multiple distro support
And if you are not being lazy about supporting all your distros, see this tip:
- name: configure ssh
  template: src={{ item }} dest={{ SSH_CONFIG }} backup=yes
  with_first_found:
    - "{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.sshd_config.j2"
    - "{{ ansible_distribution }}.sshd_config.j2"
https://ansible-tips-and-tricks.readthedocs.io/en/latest/modifying-files/modifying-files/
(needs to be updated to a loop using the first_found lookup)
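A hedged sketch of that modernisation (my own untested rewrite; SSH_CONFIG and the template names are taken from the snippet above):

- name: configure ssh
  template:
    src: "{{ lookup('first_found', sshd_config_templates) }}"
    dest: "{{ SSH_CONFIG }}"
    backup: yes
  vars:
    sshd_config_templates:
      - "{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.sshd_config.j2"
      - "{{ ansible_distribution }}.sshd_config.j2"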
It is also possible to achieve the same goal with the replace module:
https://docs.ansible.com/ansible/latest/modules/replace_module.html
- name: Uncomment line from /etc/ssh/sshd_config
  replace:
    path: /etc/ssh/sshd_config
    regexp: '^\s*#+AuthorizedKeysFile.*$'
    replace: 'AuthorizedKeysFile .ssh/authorized_keys'
If you want to simply uncomment a line without setting the value, you can use replace with backreferences, e.g. (with a handy loop):
- name: Enable sshd AuthorizedKeysFile
  replace:
    path: /etc/ssh/sshd_config
    # Remove comment and first space from matching lines
    regexp: '^#\s?(\s*){{ item }}(.+)$'
    replace: '\1{{ item }}\2'
  loop:
    - 'AuthorizedKeysFile'
This will only remove the first space after the #, and so retains any original indenting. It will also retain anything after the key (e.g. the default setting, and any following comments).
Thanks to the other helpful answers that provided a solid starting point.
