How to get unmounted device from Ansible facts - linux

I would like to implement the following via an Ansible playbook:
1. Gather Ansible host facts.
2. Run through "ansible_mounts.device" and compare it against ansible_devices.
3. If any device is not in ansible_mounts.device, print it to a file.
Below is my playbook:
- hosts: all
  become: true
  tasks:
    - name: list all mounted device
      shell: /bin/echo {{ item.device }} >> /root/mounted
      with_items: "{{ ansible_mounts }}"
      register: mounted_device

    - name: list all umount disks
      shell: /bin/echo {{ item }}
      with_items: "{{ ansible_devices.keys() }}"
      when: '{{ item }} not in {{ mounted_device }}'
However, mounted_device ends up being a list containing all the information from each ansible_mounts element, whereas I expected a list of devices like "/dev/xvda1". (In /root/mounted the values do come out as "/dev/xvda1".)
Can anyone please help with this? Or is there a better way to achieve the goal?

Whilst you could get something to work using the approach you are taking, I would not recommend it as it will be complicated and fragile.
AWS provides a special API endpoint that will expose information about your running instance. This endpoint is accessible (from your running instance) at http://169.254.169.254.
Information about block devices is located at http://169.254.169.254/latest/meta-data/block-device-mapping/ which will give you a list of block devices. The primary block device is named 'ami', and then any subsequent EBS volumes are named 'ebs2', 'ebs3', ..., 'ebsN'. You can then visit http://169.254.169.254/latest/meta-data/block-device-mapping/ebs2 which will simply return the OS device name mapped to that block device (e.g. 'sdb').
Using this information, here is some example code to access the data for the first additional EBS volume:
- name: Set EBS name to query
  set_fact:
    ebs_volume: ebs2

- name: Get device mapping data
  uri:
    url: "http://169.254.169.254/latest/meta-data/block-device-mapping/{{ ebs_volume }}"
    return_content: yes
  register: ebs_volume_data

- name: Display returned data
  debug:
    msg: "{{ ebs_volume_data.content }}"

Related

The best way to authorize ssh key of each node to all nodes in the cluster

I want to create a cluster infrastructure in which each node communicates with the others over SSH. I want to use Ansible to create an idempotent playbook/role that can be executed when the cluster is initialized or when new nodes are added to it. I was able to think of two scenarios to achieve this.
 First Scenario
Task 1 fetches the SSH key from a node (probably assigning it to a variable or writing it to a file).
Then task 2, executed locally, loops over the other nodes and authorizes the first node with the fetched key.
This scenario supports the free strategy: tasks can be executed without waiting for all hosts. But it also requires the relevant user and public key to already exist on all nodes, because if you are creating users within the same playbook (under the free strategy), task 2 may start running while the user has not yet been created on some of the other nodes in the cluster.
Although I am a big fan of the free strategy, I didn't implement this scenario for efficiency reasons: it makes roughly n² connections for an n-node cluster.
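Roughly, I imagine the tasks for this scenario would look something like the following (an untested sketch, assuming the same node_user and cluster_node group as in the snippet further down):
- name: fetch the public key from the current node
  fetch:
    src: "/home/{{ node_user }}/.ssh/id_rsa.pub"
    dest: "tmp/{{ inventory_hostname }}-id_rsa.pub"
    flat: yes

- name: authorize this node's key on every other node
  authorized_key:
    user: "{{ node_user }}"
    key: "{{ lookup('file', 'tmp/' + inventory_hostname + '-id_rsa.pub') }}"
    state: present
  delegate_to: "{{ item }}"
  with_items: "{{ groups['cluster_node'] | difference([inventory_hostname]) }}"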
 Second Scenario
Task 1 fetches the SSH key from all nodes in order, writing each one to a file named after ansible_hostname.
Then task 2 loops over the other nodes' keys and authorizes all of them on each node.
This scenario only supports the linear strategy. You can create users within the same playbook thanks to the linear strategy: all users will be created before task 1 starts running.
I think it is an efficient scenario: it makes only about 2n connections for an n-node cluster. I did implement it, and the snippet I wrote is below.
---
- name: create node user
  user:
    name: "{{ node_user }}"
    password: "{{ node_user_pass | password_hash('sha512') }}"
    shell: /bin/bash
    create_home: yes
    generate_ssh_key: yes

- name: fetch all public keys from managed nodes to manager
  fetch:
    src: "/home/{{ node_user }}/.ssh/id_rsa.pub"
    dest: "tmp/{{ ansible_hostname }}-id_rsa.pub"
    flat: yes

- name: authorize public key for all nodes
  authorized_key:
    user: "{{ node_user }}"
    key: "{{ lookup('file', 'tmp/' + item + '-id_rsa.pub') }}"
    state: present
  with_items:
    - "{{ groups['cluster_node'] }}"

- name: remove local public key copies
  become: false
  local_action: file dest='tmp/' state=absent
  changed_when: false
  run_once: true
Maybe I could use lineinfile instead of fetch, but other than that I don't know whether this is the right way. It takes very long as the cluster grows (because of the linear strategy). Is there a more efficient way I could use?
When Ansible loops through authorized_key, it will (roughly) perform the following tasks:
Create a temporary authorized_key python script on the control node
Copy the new authorized_key python script to the managed node
Run the authorized_key python script on the managed node with the appropriate parameters
This scales as n² with the number of managed nodes; with 1000 boxes, this task is performed 1000 times per box.
I'm having trouble finding specific docs which properly explain exactly what's going on under the hood, so I'd recommend running an example script to get a feel for it:
- hosts: all
  tasks:
    - name: do thing
      shell: "echo \"hello this is {{ item }}\""
      with_items:
        - alice
        - brian
        - charlie
This should be run with the triple verbose flag (-vvv) and with the output piped to a log file (e.g. ansible-playbook example-loop.yml -i hosts.yml -vvv > example-loop-output.log). Searching through that log for command.py and sftp will help you get a feel for how your script scales as the list retrieved by "{{ groups['cluster_node'] }}" grows.
For small clusters, this inefficiency is perfectly acceptable. However, it may become problematic on large clusters.
Now, the authorized_key module is essentially just generating an authorized_keys file with a) the keys which already exist within authorized_keys and b) the public keys of each node on the cluster. Instead of repeatedly generating an authorized_keys file on each box individually, we can construct the authorized_keys file on the control node and deploy it to each box.
The authorized_keys file itself can be generated with assemble; this will take all of the gathered keys and concatenate them into a single file. However, if we just synchronize or copy this file over, we'll wipe out any non-cluster keys added to authorized_keys. To avoid this, we can use blockinfile. blockinfile can manage the cluster keys added by Ansible. We'll be able to add new keys while removing those which are outdated.
- hosts: cluster
  name: create node user and generate keys
  become: yes
  tasks:
    - name: create node user
      user:
        name: "{{ node_user }}"
        password: "{{ node_user_pass | password_hash('sha512') }}"
        shell: /bin/bash
        create_home: yes
        generate_ssh_key: yes
    - name: fetch all public keys from managed nodes to manager
      fetch:
        src: "/home/{{ node_user }}/.ssh/id_rsa.pub"
        dest: "/tmp/keys/{{ ansible_host }}-id_rsa.pub"
        flat: yes

- hosts: localhost
  name: generate authorized_keys file
  tasks:
    - name: Assemble authorized_keys from a directory
      assemble:
        src: "/tmp/keys"
        dest: "/tmp/authorized_keys"

- hosts: cluster
  name: update authorized_keys file
  become: yes
  tasks:
    - name: insert/update configuration using a local file
      blockinfile:
        block: "{{ lookup('file', '/tmp/authorized_keys') }}"
        dest: "/home/{{ node_user }}/.ssh/authorized_keys"
        backup: yes
        create: yes
        owner: "{{ node_user }}"
        group: "{{ node_group }}"
        mode: 0600
As-is, this solution isn't easily compatible with roles; roles are designed to only handle a single value for hosts (a host, group, set of groups, etc), and the above solution requires switching between a group and localhost.
We can remedy this with delegate_to, although it may be somewhat inefficient with large clusters, as each node in the cluster will try assembling authorized_keys. Depending on the overall structure of the ansible project (and the size of the team working on it), this may or may not be ideal; when skimming a large script with delegate_to, it can be easy to miss that something's being performed locally.
- hosts: cluster
  name: create node user and generate keys
  become: yes
  tasks:
    - name: create node user
      user:
        name: "{{ node_user }}"
        password: "{{ node_user_pass | password_hash('sha512') }}"
        shell: /bin/bash
        create_home: yes
        generate_ssh_key: yes
    - name: fetch all public keys from managed nodes to manager
      fetch:
        src: "/home/{{ node_user }}/.ssh/id_rsa.pub"
        dest: "/tmp/keys/{{ ansible_host }}-id_rsa.pub"
        flat: yes
    - name: Assemble authorized_keys from a directory
      assemble:
        src: "/tmp/keys"
        dest: "/tmp/authorized_keys"
      delegate_to: localhost
    - name: insert/update configuration using a local file
      blockinfile:
        block: "{{ lookup('file', '/tmp/authorized_keys') }}"
        dest: "/home/{{ node_user }}/.ssh/authorized_keys"
        backup: yes
        create: yes
        owner: "{{ node_user }}"
        group: "{{ node_group }}"
        mode: 0600
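If the repeated assembly on every node is a concern, adding run_once to the delegated task should limit it to a single local run; untested, but standard Ansible behaviour:
- name: Assemble authorized_keys from a directory
  assemble:
    src: "/tmp/keys"
    dest: "/tmp/authorized_keys"
  delegate_to: localhost
  run_once: true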

Using register in playbook for multiple clients

Hello Everyone.
The first task in my playbook will be executed on the server.
The second task will be executed on the clients.
First task: generate token numbers for all clients listed in the inventory.
- hosts: Server
  vars:
    clients:
      - clientA
      - ClientB
  tasks:
    - name: generate ticket on server and save it as a variable
      shell: /path/to/bin ticket {{ clients }}
      register: ticket
Second task: make each client use the generated token specific to it.
(Example: ClientA should take ticket {{ hostvars['server']['ticket'][0]['stdout'] }}.)
Output example for one client: "stdout": "9338e126e8dd454820870b3ba19f5344334c8b1d"
Note: the play below is for one client.
- hosts: ClientA
  tasks:
    - shell: /path/to/bin --key /path/to/store-key/ticket.key --ticket {{ hostvars['server']['ticket']['stdout'] }}
The above play works completely fine for one client, but I have no idea how to write the play for multiple clients (in a loop).
I need input on the shell value for the play below (for multiple clients):
- hosts: "{{ clients }}"
vars:
clients:
- clientA
- ClientB
tasks:
shell: /path/to/bin --key /path/to/store-key/ticket.key --ticket !!!!!!!!Please your input here !!!!!!!!!
How can we achieve this?
One possible solution is to:
Add an index (uid) to each host in the clients group:
clients:
  hosts:
    clientA:
      uid: 0
      <etc>
    clientB:
      uid: 1
      <etc>
Add a loop to the server part (see below).
Address each client's token by using its uid as the array index into the registered ticket variable.
- hosts: serverA
  tasks:
    - name: generate ticket on server and save it as a variable
      shell: /path/to/bin ticket {{ item }}
      register: ticket
      with_items:
        - "{{ groups['clients'] }}"

- hosts: clients
  tasks:
    - name: checkticket
      shell: /path/to/bin --key /path/to/store-key/ticket.key --ticket {{ hostvars['serverA']['ticket']['results'][uid]['stdout'] }}
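If you would rather not maintain uid values by hand, the index can also be derived from the group itself, since the server loop iterates over groups['clients'] in order. An untested variation of the client play:
- hosts: clients
  tasks:
    - name: checkticket
      shell: /path/to/bin --key /path/to/store-key/ticket.key --ticket {{ hostvars['serverA']['ticket']['results'][groups['clients'].index(inventory_hostname)]['stdout'] }}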

Ansible Azure Dynamic Inventory and Sharing variables between hosts in a single playbook

Problem: referencing a fact about a host (in this case, the private IP) from another host in a playbook using a wildcard only seems to work in the "hosts" part of a play, not inside a task: vm_ubuntu* cannot be used in a task.
In a single playbook, I have a couple of hosts, and because the inventory is dynamic, I don't have the hostname ahead of time, as Azure appends an identifier to it after it has been created.
I am using Terraform to create the VMs.
And using the Azure dynamic inventory method.
I am calling my playbook like this, where myazure_rm.yml is a bog standard azure dynamic inventory method, as of the time of this writing.
ansible-playbook -i ./myazure_rm.yml ./bwaf-playbook.yaml --key-file ~/.ssh/id_rsa --u azureuser
My playbook looks like this ( abbreviated ).
- hosts: vm_ubuntu*
  tasks:
    - name: housekeeping
      set_fact:
        vm_ubuntu_private_ip: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"
    - debug: var=vm_ubuntu_private_ip

- hosts: vm_bwaf*
  connection: local
  vars:
    vm_bwaf_private_ip: "{{ private_ipv4_addresses | join }}"
    vm_bwaf_public_ip: "{{ public_ipv4_addresses | join }}"
    vm_ubuntu_private_ip: "{{ hostvars['vm_ubuntu*']['ip'] }}"
    api_url: "http://{{ vm_bwaf_public_ip }}:8000/restapi/{{ api_version }}"
I am answering my own question to get rep, and to help others of course.
I also want to thank the person (https://stackoverflow.com/users/4281353/mon) who came up with this first, which appears here: How do I set register a variable to persist between plays in ansible?
- name: "Save private ip to dummy host"
add_host:
name: "dummy_host"
ip: "{{ vm_ubuntu_private_ip }}"
And then this can be referenced in the other host in the playbook like this:
- hosts: vm_bwaf*
  connection: local
  vars:
    vm_bwaf_private_ip: "{{ private_ipv4_addresses | join }}"
    vm_bwaf_public_ip: "{{ public_ipv4_addresses | join }}"
    vm_ubuntu_private_ip: "{{ hostvars['dummy_host']['ip'] }}"
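An alternative, if you would rather skip the dummy host, is to pick the first inventory name that matches the pattern and read its gathered facts directly. A sketch, assuming facts were gathered for the vm_ubuntu* hosts in an earlier play:
- hosts: vm_bwaf*
  connection: local
  vars:
    vm_ubuntu_host: "{{ groups['all'] | select('match', 'vm_ubuntu') | first }}"
    vm_ubuntu_private_ip: "{{ hostvars[vm_ubuntu_host]['ansible_default_ipv4']['address'] }}"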

Splunkforwarder using ansible

I would like to monitor multiple logs on the universal forwarder. How can I do this? Also, when I set the forward-server I am running into an error, and with enable boot-start I somehow have to accept the license manually to finish the installation. Any suggestions, please?
- name: connect forward server to Splunk server
  command: "{{ splunkbin }} add forward-server {{ item }} -auth {{ splunkcreds }}"
  with_items: "{{ splunkserver }}"
  when: splunkserver is defined
  notify: restart_splunk

- name: Enable Boot Start
  command: "{{ splunkbin }} enable boot-start"

- name: add temporary monitor to create directory
  command: "{{ splunkbin }} add monitor /etc/hosts -auth {{ splunkcreds }}"
  notify: restart_splunk
Use the following to accept the license without prompting
- name: Enable Boot Start
  command: "{{ splunkbin }} enable boot-start --accept-license"

Ansible filter output start with specific letter

I have a simple playbook that returns the names of available update packages, as below. I would like to filter the output for names starting with a specific letter, for example to get package names starting with 'n'. Any thoughts would be much appreciated :-)
---
- name: yum list updates
  hosts: all
  tasks:
    - name: get updates list
      yum:
        list: updates
      register: yum
    - name: set fact
      set_fact:
        package_name: "{{ yum.results | map(attribute='name') | list }}"
Try
package_name: "{{ yum.results|selectattr('name', 'search', '^n')|list }}"
(not tested)
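Since the original set_fact keeps only the names, the same filter can be combined with map, equally untested but along the same lines:
package_name: "{{ yum.results | selectattr('name', 'search', '^n') | map(attribute='name') | list }}"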
