ansible set fact from another task output - linux

I'm having trouble with a custom Ansible module I wrote. Its output looks like this:
ok: [localhost] => {
    "msg": {
        "ansible_facts": {
            "device_id": "/dev/sdd"
        },
        "changed": true,
        "failed": false
    }
}
my custom module:
#!/usr/bin/env python
from ansible.module_utils.basic import AnsibleModule
import json

def find_uuid():
    with open("/etc/ansible/roles/addDisk/library/debug/disk_fact.json") as disk_fact_file, open("/etc/ansible/roles/addDisk/library/debug/device_links.json") as device_links_file:
        disk_fact_data = json.load(disk_fact_file)
        device_links_data = json.load(device_links_file)
    device = []
    for p in disk_fact_data['guest_disk_facts']:
        if disk_fact_data['guest_disk_facts'][p]['controller_key'] == 1000:
            if disk_fact_data['guest_disk_facts'][p]['unit_number'] == 3:
                uuid = disk_fact_data['guest_disk_facts'][p]['backing_uuid'].split('-')[4]
                for key, value in device_links_data['ansible_facts']['ansible_device_links']['ids'].items():
                    for d in value:
                        if uuid in d and key not in device:
                            device.append(key)
    if len(device) == 1:
        json_data = {"device_id": "/dev/" + device[0]}
        return True, json_data
    # Return a two-element tuple in the failure case as well,
    # so the unpacking below never raises.
    return False, None

check, jsonData = find_uuid()

def main():
    module = AnsibleModule(argument_spec={})
    if check:
        module.exit_json(changed=True, ansible_facts=jsonData)
    else:
        module.fail_json(msg="error find device")

main()
I want to use the device_id variable in other tasks. I think I should handle this with the module.exit_json method, but how can I do that?

I want to use device_id variable on the other tasks
The thing you are looking for is register:, in order to make that value persist into the "host facts" for the hosts against which that task ran. Then you can go with a "push" model, in which you set that fact upon every other host that interests you, or with a "pull" model, wherein interested hosts reach out to get the value at the time they need it.
Let's look at both cases, for comparison.
First, capture that value, and I'll use a host named "alpha" for ease of discussion:
- hosts: alpha
  tasks:
    - name: find the uuid task
      # or whatever you have called your custom task
      find_uuid:
      register: the_uuid_result
Now the output is available on the host "alpha" as {{ vars["the_uuid_result"]["device_id"] }}, which will be /dev/sdd in your example above. One can also abbreviate that as {{ the_uuid_result.device_id }}.
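For instance, a quick way to verify the registered value on "alpha" (a minimal sketch; it assumes the play above has already run):

- name: show the discovered device
  debug:
    msg: 'device_id is {{ the_uuid_result.device_id }}'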
In the "push" model, you can now iterate over all hosts, or just those in a specific group, that should also receive that device_id fact; for this example, let's target an existing group of hosts named "need_device_id":
- hosts: alpha  # just as before, but for context ...
  tasks:
    - find_uuid:
      register: the_uuid_result

    # now propagate out the value
    - name: declare device_id fact on other hosts
      set_fact:
        device_id: '{{ the_uuid_result.device_id }}'
      delegate_to: '{{ item }}'
      with_items: '{{ groups["need_device_id"] }}'
And, finally, in contrast, one can reach over and pull that fact if host "beta" needs to look up the device_id that host "alpha" discovered:
- hosts: alpha
  # as before

- hosts: beta
  tasks:
    - name: set the device_id fact on myself from alpha
      set_fact:
        device_id: '{{ hostvars["alpha"]["the_uuid_result"]["device_id"] }}'
You could also run that same set_fact: device_id: ... business on "alpha" itself, in order to keep the "local" variable named the_uuid_result from leaking out of alpha's play. Up to you.
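That variant would look something like this (a minimal sketch reusing the names from above):

- hosts: alpha
  tasks:
    - find_uuid:
      register: the_uuid_result
    - name: keep only the value we care about as a named fact
      set_fact:
        device_id: '{{ the_uuid_result.device_id }}'

Other hosts can then pull {{ hostvars["alpha"]["device_id"] }} without ever knowing about the intermediate the_uuid_result variable.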

Related

How can I pass a variable set by the set_fact module to a Jinja2 template?

I have a role to set up a NATS cluster. I've used host_vars to define which node is the master node, like below:
is_master: true
Then, in the setup-nats.yml task file, I used the following to extract the master node's IP address based on the host_var I've set, and then used it as a variable for the Jinja2 template. However, the variable doesn't get passed down to the template and I get the error: variable 'master_ip' is undefined.
- name: Set master IP
  set_fact:
    set_master_ip: "{{ ansible_facts['default_ipv4']['address'] }}"
    cacheable: yes
  when: is_master

- name: debug
  debug:
    msg: "{{ set_master_ip }}"
  run_once: true

- name: generate nats-server.conf for the slave nodes
  template:
    src: nats-server-slave.conf.j2
    dest: /etc/nats-server.conf
    owner: nats
    group: nats
    mode: 0644
  when:
    - is_master == false
  vars:
    master_ip: "{{ set_master_ip }}"
  notify: nats-server
The variable is used like below in the Jinja2 template:
routes = [
    nats-route://ruser:{{ nats_server_password }}@{{ master_ip }}:6222
]
}
Questions:
Is this approach in line with best practices?
What is the correct way of doing the above so the variable is passed down to the template?
Test Output:
I'm using Molecule to test my Ansible role, and even though the IP address is visible in the debug task, it doesn't get passed down to the template:
TASK [nats : Set master IP] ****************************************************
ok: [target1]
skipping: [target2]
skipping: [target3]
TASK [nats : debug] ************************************************************
ok: [target1] =>
msg: 10.0.2.15
TASK [nats : generate nats-server.conf for the slave nodes] ********************
skipping: [target1]
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleUndefinedVariable: {{ set_master_ip }}: 'set_master_ip' is undefined
fatal: [target2]: FAILED! => changed=false
msg: 'AnsibleUndefinedVariable: {{ set_master_ip }}: ''set_master_ip'' is undefined'
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleUndefinedVariable: {{ set_master_ip }}: 'set_master_ip' is undefined
fatal: [target3]: FAILED! => changed=false
msg: 'AnsibleUndefinedVariable: {{ set_master_ip }}: ''set_master_ip'' is undefined'
Any help is appreciated, thanks in advance.
UPDATE: I suspect the issue has something to do with the variable scope being in the host context, but I cannot find a way to fix it (I might be wrong though).
Far from being best practice IMO, but answering your direct question: your problem is not passing the variable to your template but the fact that it is not assigned to all hosts in your play loop (and hence is undefined on any non-master node). The following (untested) addresses that issue while keeping the same task structure.
- name: Set master IP for all nodes
  ansible.builtin.set_fact:
    master_ip: "{{ hostvars | dict2items | map(attribute='value')
      | selectattr('is_master', 'defined') | selectattr('is_master')
      | map(attribute='ansible_facts.default_ipv4.address') | first }}"
    cacheable: yes
  run_once: true

- name: Show calculated master IP (making sure it is assigned everywhere)
  ansible.builtin.debug:
    msg: "{{ master_ip }}"

- name: generate nats-server.conf for the slave nodes
  ansible.builtin.template:
    src: nats-server-slave.conf.j2
    dest: /etc/nats-server.conf
    owner: nats
    group: nats
    mode: 0644
  when: not is_master | bool
  notify: nats-server
Ideas for enhancement (non-exhaustive):
Select your master based on group membership in the inventory rather than on a host attribute. This makes gathering the IP easier (e.g. master_ip: "{{ hostvars[groups.master | first].ansible_facts.default_ipv4.address }}"); see the sketch after this list.
Set the IP as a play var, or directly inside the inventory for the node group, rather than in a set_fact task.
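A minimal sketch of that group-based variant (the group name master and the inventory layout are assumptions for illustration, not from the original question):

# inventory (hypothetical)
[master]
target1

[replicas]
target2
target3

# task file
- name: generate nats-server.conf for the slave nodes
  ansible.builtin.template:
    src: nats-server-slave.conf.j2
    dest: /etc/nats-server.conf
  vars:
    master_ip: "{{ hostvars[groups.master | first].ansible_facts.default_ipv4.address }}"
  when: inventory_hostname not in groups.master
  notify: nats-server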

Sharing WireGuard public keys with ansible.posix.synchronize

I've just started getting into Ansible, so can you please help me or maybe give some advice?
I'm trying to install and configure WireGuard with an Ansible playbook (just in case: I do know how to configure WireGuard without Ansible).
I want to share the public keys through Ansible, and then read them in wg0.conf via PublicKey = {{ lookup('file', '/etc/wireguard/publickey_client') }}.
I'm trying to use ansible.posix.synchronize in my playbook, but when it gets to the "sharing keys" task it just sits there and doesn't do anything (for a long time) until I stop the process.
Starting the playbook with -vv doesn't show anything either.
Playbook wireguard_configuration.yml:
---
- hosts: client
  name: make wg keys on client
  become: true
  tasks:
    - name: wg0.conf client file
      ansible.builtin.copy:
        src: /etc/ansible/conf/wg0_client.conf
        dest: /etc/wireguard/wg0.conf
        mode: 0755
        owner: owner
    - name: creating wg keys on client
      ansible.builtin.shell:
        cmd: wg genkey | tee privatekey_client | wg pubkey > publickey_client
        chdir: /etc/wireguard
    - name: share pubkey from client to server
      ansible.posix.synchronize:
        src: /etc/wireguard/publickey_client
        dest: /etc/wireguard/publickey_client
      delegate_to: server

- hosts: server
  name: make wg keys on server
  become: true
  tasks:
    - name: wg0.conf server file
      ansible.builtin.copy:
        src: /etc/ansible/conf/wg0_server.conf
        dest: /etc/wireguard/wg0.conf
        mode: 0755
        owner: owner
    - name: creating wg keys on server
      ansible.builtin.shell:
        cmd: wg genkey | tee privatekey_server | wg pubkey > publickey_server
        chdir: /etc/wireguard
    - name: share pubkey from server to client
      ansible.posix.synchronize:
        src: /etc/wireguard/publickey_server
        dest: /etc/wireguard/publickey_server
      delegate_to: client
You don't need the synchronize module here: you're not trying to copy a large hierarchy of files; you're only trying to bring a single value from the client to the server. I think a better option is just to stick that value in a variable on the client and then access it via hostvars on the server.
The following playbook is one way of doing that. A few things to note:
I've tried to document the tasks, but let me know if something isn't clear.
This playbook is written to be idempotent: you can run it multiple times and it will only generate the private key once.
- hosts: client
  gather_facts: false
  become: true
  tasks:
    # Read an existing private key if it is available. We set
    # failed_when to false because an "error" simply means that
    # the key doesn't exist and we need to generate it.
    - name: read private key
      command: cat /etc/wireguard/privatekey_client
      failed_when: false
      changed_when: wg_private_read.rc != 0
      register: wg_private_read

    # Generate a new key if necessary. We used the "is changed" test
    # here so that we only generate a new key if we failed to read an
    # existing key in the previous task.
    - name: generate private key
      when: wg_private_read is changed
      command: wg genkey
      register: wg_private_create

    # This will either create the privatekey_client file or leave it
    # unmodified (because the content matches what we read from it
    # earlier in the "read private key" task).
    - name: write private key
      when: wg_private_read is changed
      copy:
        content: "{{ wg_private_create.stdout }}"
        dest: /etc/wireguard/privatekey_client

    # We generate a public key but we don't bother writing it to disk.
    # The client doesn't need it and we can always generate it from
    # the private key.
    - name: generate public key
      shell:
        cmd: wg pubkey
        stdin: "{{ (wg_private_read is changed)|ternary(wg_private_create.stdout, wg_private_read.stdout) }}"
      changed_when: false
      register: wg_public
- hosts: server
  gather_facts: false
  become: true
  tasks:
    - name: write client public key
      copy:
        content: "{{ hostvars.client.wg_public.stdout }}"
        dest: "/etc/wireguard/publickey_client"
Some useful documentation links:
About failed_when and changed_when
The ternary filter

Filter data using JSON query in Ansible to extract data from an ansible_fact

I have created this playbook to extract all mount points that start with any element of the whitelist variable and match the fstype ext2, ext3, or ext4.
The problem is that I can get all mount_points but I am not able to filter the result with the variables.
- hosts: all
  gather_facts: True
  become: True
  vars:
    whitelist:
      - /boot
      - /home
      - /opt
      - /var
      - /bin
      - /usr
  tasks:
    - name: extract mount_points
      set_fact:
        mount_point: "{{ ansible_facts.mounts | selectattr('fstype', 'in', ['ext2', 'ext3', 'ext4']) | map(attribute='mount') | list }}"
    - debug:
        var: mount_point
      vars:
        query: "[?starts_with(mount, whitelist)].mount"
When I execute the playbook I get this:
ok: [ansible@controller] => {
    "mount_point": [
        "/",
        "/boot",
        "/tmp",
        "/home",
        "/var",
        "/var/opt",
        "/var/tmp",
        "/var/log",
        "/var/log/audit",
        "/opt"
    ]
}
/tmp is included, which means that query: "[?starts_with(mount, whitelist)].mount" was never applied, and I don't know how to achieve the playbook's goal.
You don't really need a JSON query here IMO. An easy way is to filter the list with the match test, constructing a regex that contains all possible prefixes:
- name: show my filtered mountpoints
  vars:
    start_regex: "{{ whitelist | map('regex_escape') | join('|') }}"
  debug:
    msg: "{{ ansible_facts.mounts | selectattr('fstype', 'in', ['ext2', 'ext3', 'ext4'])
      | map(attribute='mount') | select('match', start_regex) | list }}"
While testing, I used a filter expression within the json_query string, which achieved the same goal for me (see Query String with Dynamic Variable).
In this example the filter expression is used with a built-in function, starts_with. It returns a boolean result, based on a match with a search string, for any element of fstype and mount, using the dynamic variable {{ whitelist }}.
- name: find mount_points
  debug:
    msg: "{{ ansible_mounts | to_json | from_json | json_query(mount_point) }}"
  vars:
    mount_point: "@[?starts_with(fstype, 'ext')] | @[?starts_with(mount, '{{ item }}')].mount"
  loop: "{{ whitelist }}"
This is the output:
ok: [ansible@ansible1] => (item=/var) => {
    "msg": [
        "/var",
        "/var/opt",
        "/var/tmp",
        "/var/log",
        "/var/log/audit"
    ]
}
ok: [ansible@ansible1] => (item=/usr) => {
    "msg": []
}
ok: [ansible@ansible1] => (item=/home) => {
    "msg": [
        "/home"
    ]
}

How to iterate over all the variables from a variables template file in an Azure pipeline?

test_env_template.yml
variables:
  - name: DB_HOSTNAME
    value: 10.123.56.222
  - name: DB_PORTNUMBER
    value: 1521
  - name: USERNAME
    value: TEST
  - name: PASSWORD
    value: TEST
  - name: SCHEMANAME
    value: SCHEMA
  - name: ACTIVEMQNAME
    value: 10.123.56.223
  - name: ACTIVEMQPORT
    value: 8161
and many more variables in the list.
I want to iterate through all the variables in test_env_template.yml using a loop in order to replace values in a file. Is there a way to do that rather than referencing each value separately, like ${{ variables.ACTIVEMQNAME }}? The number of variables in the template is dynamic.
In short: no. There is no easy way to get, from within the pipeline, only the variables that came from your template. You can get environment variables, but there you will get regular environment variables together with pipeline variables mapped to environment variables.
You could list those via env | sort, but I'm pretty sure that is not what you want.
You can't display variables specific to a template, but you can get all pipeline variables this way:
steps:
  - pwsh: |
      Write-Host "${{ convertToJson(variables) }}"
and then you will get
{
system: build,
system.hosttype: build,
system.servertype: Hosted,
system.culture: en-US,
system.collectionId: be1a2b52-5ed1-4713-8508-ed226307f634,
system.collectionUri: https://dev.azure.com/thecodemanual/,
system.teamFoundationCollectionUri: https://dev.azure.com/thecodemanual/,
system.taskDefinitionsUri: https://dev.azure.com/thecodemanual/,
system.pipelineStartTime: 2021-09-21 08:06:07+00:00,
system.teamProject: DevOps Manual,
system.teamProjectId: 4fa6b279-3db9-4cb0-aab8-e06c2ad550b2,
system.definitionId: 275,
build.definitionName: kmadof.devops-manual 123 ,
build.definitionVersion: 1,
build.queuedBy: Krzysztof Madej,
build.queuedById: daec281a-9c41-4c66-91b0-8146285ccdcb,
build.requestedFor: Krzysztof Madej,
build.requestedForId: daec281a-9c41-4c66-91b0-8146285ccdcb,
build.requestedForEmail: krzysztof.madej@hotmail.com,
build.sourceVersion: 583a276cd9a0f5bf664b4b128f6ad45de1592b14,
build.sourceBranch: refs/heads/master,
build.sourceBranchName: master,
build.reason: Manual,
system.pullRequest.isFork: False,
system.jobParallelismTag: Public,
system.enableAccessToken: SecretVariable,
DB_HOSTNAME: 10.123.56.222,
DB_PORTNUMBER: 1521,
USERNAME: TEST,
PASSWORD: TEST,
SCHEMANAME: SCHEMA,
ACTIVEMQNAME: 10.123.56.223,
ACTIVEMQPORT: 8161
}
If you prefix them, then you can try to filter them out, for example with jq or plain grep.
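For example, with an assumed TESTENV_ prefix on every variable in the template (the prefix and this script step are illustrative, not part of the original setup), the variables are exposed to scripts as environment variables and can be selected by that prefix:

steps:
  - bash: |
      # Pipeline variables are mapped into the environment, so a shared
      # prefix makes the template's variables easy to select and iterate.
      env | grep '^TESTENV_' | sort | while IFS='=' read -r name value; do
        echo "replacing $name with $value"
      done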

Ansible - Access output of a shell command with with_items

I wrote a python script which gets executed via my ansible playbook and returns the following output via stdout:
- { username: ansible, admin: yes}
- { username: test, admin: no }
The output should then get saved in the variable users, and with with_items (or the newer loop keyword) I want to iterate through the variable in order to assign the right permissions to each user separately:
- name: execute python script
  command: "python3 /tmp/parse_string.py --user_permissions={{ user_permissions }}"
  register: output

- name: register
  set_fact:
    users: "{{ output.stdout }}"

- name: output
  debug: msg="{{ users }}"

- name: Add user to group -admin
  user:
    name={{ item.username }}
    groups=admin
    append=yes
    state=present
  when: "item.admin == yes"
  with_items: '{{ users }}'
However, when launching the playbook, it says that the variable users has no attribute username.
TASK [create_users : output] ***************************************************
ok: [ansible] => {
"msg": "- { username: ansible, admin: yes }\n- { username: test, admin: no }\n- { username: test2, admin: no }"
}
TASK [create_users : Add user to group -admin] ***************
fatal: [ansible]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'username'\n\nThe error appears to be in '***': line 29, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: \n ^ here\n"}
Can anyone help me with this case?
BR
You are setting your users var to a string. It happens that this string is a YAML representation of a data structure, but Ansible has no clue about that at this point.
To achieve your requirement, you need to parse that string as YAML and register the result. Luckily, there is a from_yaml filter for exactly that purpose.
You just have to modify your set_fact task as follows and everything should work as expected:
- name: register
  set_fact:
    users: "{{ output.stdout | from_yaml }}"
