I hope someone can help me with my Ansible task problem. I deploy the SNMP configuration via Ansible on my servers and work with snmp extend to trigger my scripts over SNMP with certain OIDs. After my playbook has run and Ansible has deployed the SNMP configuration, I manually execute the following command to get the OID for a certain extend, for example:
snmptranslate -On NET-SNMP-EXTEND-MIB::nsExtendOutput1Line.\"folder-size-/home\"
I would like to do this part automatically via Ansible. I have the variables:
snmp_mountpoints_extends:
  - folder-size
  - folder-avail
  - folder-used
and in my inventory I define the following variables for the host:
server1:
  custom_mountpoints:
    - /home
    - /opt
My Ansible task:
- name: Generate OIDs for custom inventory variables
  become: yes
  shell: 'snmptranslate -On NET-SNMP-EXTEND-MIB::nsExtendOutput1Line.\"{{ item }}-{{ custom_mountpoints[0] }}\"'
  with_items: "{{ snmp_mountpoints_extends }}"
  register: custom_mountpoints_output
  when:
    - custom_mountpoints is defined

- name: print output from custom_mountpoints_output
  debug: msg={{ custom_mountpoints_output }}
This works fine, but only for the first host variable, /home. How can I iterate over my custom_mountpoints with each value from snmp_mountpoints_extends?
Thank you in advance.
According to the Ansible documentation on iterating over nested lists, you may use the following approach:
---
- hosts: localhost
  become: false
  gather_facts: false
  vars:
    server1:
      custom_mountpoints:
        - /home
        - /opt
    snmp_mountpoints_extends:
      - folder-size
      - folder-avail
      - folder-used
  tasks:
    - name: Iterating over nested lists
      debug:
        msg: "{{ item[0] }} and {{ item[1] }}"
      loop: "{{ server1.custom_mountpoints | product(snmp_mountpoints_extends) | list }}"
resulting in the desired output:
/home and folder-size
/home and folder-avail
/home and folder-used
/opt and folder-size
/opt and folder-avail
/opt and folder-used
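Applied to the task from the question, a minimal sketch could look like the following (assuming the same variable names; item.0 is the mountpoint and item.1 the extend prefix, and default([]) keeps hosts without custom_mountpoints from failing the loop evaluation):

- name: Generate OIDs for custom inventory variables
  become: yes
  # item.1 is the extend prefix (e.g. folder-size), item.0 the mountpoint (e.g. /home)
  shell: 'snmptranslate -On NET-SNMP-EXTEND-MIB::nsExtendOutput1Line.\"{{ item.1 }}-{{ item.0 }}\"'
  loop: "{{ custom_mountpoints | default([]) | product(snmp_mountpoints_extends) | list }}"
  register: custom_mountpoints_output

- name: print output from custom_mountpoints_output
  debug:
    msg: "{{ custom_mountpoints_output.results | map(attribute='stdout') | list }}"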
We want to create Ansible code that asks interactive questions like the bash script does.
For now we have the following bash script, with 43 different questions, that finally creates an ini file according to our questions:
bash /home/gentwo.bash
how many machines?23
how many datanode services?
IP address for first machine - andnenda01?
.
.
.
---
As we know, we can't do the same with Ansible like this:
- hosts: 17.12.22.56
  gather_facts: yes
  vars:
    app_name: interactive process
    ansible_user: root
    ansible_password: XXXXXXXXXXX
  tasks:
    - name: interactive process
      script: "/home/gentwo.bash"
      register: results
So what is the equivalent approach with Ansible?
You can use prompts for interactive input, e.g. from the docs:
---
- hosts: all
  vars_prompt:
    - name: username
      prompt: What is your username?
      private: no
    - name: password
      prompt: What is your password?
  tasks:
    - name: Print a message
      ansible.builtin.debug:
        msg: 'Logging in as {{ username }}'
Ansible Doc: https://docs.ansible.com/ansible/latest/user_guide/playbooks_prompts.html
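To end up with an ini file like the bash script produces, the prompted answers can then be written out with a template or copy task. A minimal sketch, assuming hypothetical variable names and a hypothetical output path of /tmp/cluster.ini:

---
- hosts: localhost
  gather_facts: false
  vars_prompt:
    - name: machine_count
      prompt: How many machines?
      private: no
    - name: datanode_count
      prompt: How many datanode services?
      private: no
  tasks:
    - name: write the answers to an ini file
      copy:
        dest: /tmp/cluster.ini   # hypothetical output path
        content: |
          [cluster]
          machines={{ machine_count }}
          datanodes={{ datanode_count }}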
Only output from the first command is written to the file.
How do I make it write output from all of the commands to the file?
---
- name: run show commands
  hosts: nexus1
  gather_facts: False
  tasks:
    - name: run show commands on nexus
      nxos_command:
        commands:
          - show hostname
          - show ip route
          - show interface
          - show ip interface vrf all
          - show hsrp
      register: output

    - name: Copy to server
      copy:
        content: "{{ output.stdout[0] }}"
        dest: "/home/CiscoOutPut/{{ inventory_hostname }}.txt"
You're only asking for output from the first command. output.stdout is a list, with one item for the output of each command. When you ask for output.stdout[0], you're asking for only the first result.
If you want to write the output of all commands to a file, then something like:
- name: Copy to server
  copy:
    content: "{{ '\n'.join(output.stdout) }}"
    dest: "/home/CiscoOutPut/{{ inventory_hostname }}.txt"
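The Jinja2 join filter is an equivalent, arguably more idiomatic, way to write the same thing:

- name: Copy to server
  copy:
    content: "{{ output.stdout | join('\n') }}"
    dest: "/home/CiscoOutPut/{{ inventory_hostname }}.txt"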
I want to use the Ansible setup module to retrieve host specs, and I tried it with a bash for loop.
Ansible version: 2.4
My host inventory has been defined in a group of machines which I called rhelmachines.
I would like to collect the following list of variables, called "specs":
declare -a specs=("ansible_all_ipv4_addresses" "ansible_processor" "ansible_processor_cores" "ansible_uptime_seconds")
I am then trying to include the ansible command in a bash for loop:

for i in "${specs[@]}"
do
  ansible rhelmachines -m setup -a "filter=$i"
done
How can I concatenate multiple filters in one connection only?
Thanks!
With a little sed hackery to convert ansible's output to JSON, you can use jq to extract only the pieces you need:
ansible -m setup localhost | sed -e 's/^[[:alpha:]].*[|].* [>][>] {$/{/' | jq -n '
[inputs |
.ansible_facts as $facts |
$facts.ansible_hostname as $hostname |
{($hostname): {
"ipv4_addresses": $facts.ansible_all_ipv4_addresses,
"processor": $facts.ansible_processor[0],
"cores": $facts.ansible_processor_cores,
"uptime": $facts.ansible_uptime_seconds}}] | add'
...generates output of the form:
{
"my-current-hostname": {
"ipv4_addresses": [
"192.168.119.129"
],
"processor": "Intel(R) Core(TM) i7-6700HQ CPU # 2.60GHz",
"cores": 1,
"uptime": null
}
}
(run with ansible 1.4.5, which doesn't generate uptime).
As one possible solution, I implemented Ansible code exploiting Ansible facts. It first gathers the facts, then uses a local_action with a loop. The loop items are the individual Ansible facts; for every fact, one line is written to a file. This way I get a file composed of all the Ansible facts I declared in the loop for the rhelmachines.
---
- hosts: rhelmachines
  gather_facts: True
  tasks:
    - name: Gather Facts
      setup: gather_subset=all
      register: facts

    - debug:
        msg: "{{ facts.ansible_facts.ansible_all_ipv4_addresses }}"

    - name: copy content from facts to output file
      local_action:
        module: lineinfile
        line: "{{ item }}"
        path: /tmp/assessments/facts.txt
      loop:
        - "{{ facts.ansible_facts.ansible_all_ipv4_addresses }}"
        - "{{ facts.ansible_facts.ansible_all_ipv6_addresses }}"
I took @Luigi Sambolino's answer and made it better. His answer was failing with more than one host in the inventory. He proposed using lineinfile, which has one con in this situation: every fact that was the same as on another machine was omitted. Another drawback was that the results weren't kept together; everything was mixed.
I needed to collect some basic information about the systems, like IP, OS version, and so on. Here's my playbook:
- hosts: all
  gather_facts: true
  ignore_unreachable: true
  tasks:
    - name: get the facts
      setup: gather_subset=all
      register: facts

    - name: remove file
      local_action:
        module: file
        path: results
        state: absent

    - name: save results in file
      local_action:
        module: shell
        cmd: echo "{{ item }}" >> results
      with_together:
        - "{{ facts.ansible_facts.ansible_default_ipv4.address }}"
        - "{{ facts.ansible_facts.ansible_architecture }}"
        - "{{ facts.ansible_facts.ansible_distribution }}"
        - "{{ facts.ansible_facts.ansible_distribution_version }}"
        - "{{ facts.ansible_facts.ansible_hostname }}"
        - "{{ facts.ansible_facts.ansible_kernel }}"
Now the results look like this:
...
['10.200.1.21', 'x86_64', 'Ubuntu', '18.04', 'bacula', '4.15.18-7-pve']
['10.200.2.53', 'x86_64', 'Ubuntu', '18.04', 'webserver', '4.15.18-27-pve']
...
The square brackets can be deleted with sed, and we end up with a nice CSV file that can be used with any spreadsheet, for example.
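Alternatively, the sed post-processing can be skipped by joining the values into a CSV line before writing them out. A sketch, reusing the registered facts variable from the playbook above:

- name: save results as a CSV line
  local_action:
    module: lineinfile
    path: results.csv
    create: yes
    line: "{{ [facts.ansible_facts.ansible_default_ipv4.address,
               facts.ansible_facts.ansible_architecture,
               facts.ansible_facts.ansible_distribution,
               facts.ansible_facts.ansible_distribution_version,
               facts.ansible_facts.ansible_hostname,
               facts.ansible_facts.ansible_kernel] | join(',') }}"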
specs=( "ansible_all_ipv4_addresses"
        "ansible_processor"
        "ansible_processor_cores"
        "ansible_uptime_seconds" )

args=( )
for spec in "${specs[@]}"; do args+=( -a "$spec" ); done

ansible rhelmachines -m setup "${args[@]}"
...will result in your final command being equivalent to:
ansible rhelmachines -m setup \
-a ansible_all_ipv4_addresses \
-a ansible_processor \
-a ansible_processor_cores \
-a ansible_uptime_seconds
This question already has answers here:
Ansible delegate and run_once (1 answer)
Running a task on a single host always with Ansible? (1 answer)
I have the below Ansible playbook in which I want to do a few things:
I want to run the first task only once for the entire run of this playbook. Since I have 100 machines in the servers group, "task one" should run only once, at the start.
I want to run task three only once as well, but at the very end of this playbook, when it is working on the last machine. I am not sure whether this is possible.
What I need to do is:
Copy the clients.tar.gz file from some remote servers to the local box (inside the /tmp folder) where my Ansible is running.
Then unarchive this clients.tar.gz file on all the servers specified in the "servers" group.
And at the end delete this tar.gz file from the /tmp folder.
Below is my Ansible script. Is this possible to do by any chance?
---
- name: copy files
  hosts: servers
  serial: 10
  tasks:
    - name: copy clients.tar.gz file. Run this task only once during starting
      shell: "(scp -o StrictHostKeyChecking=no goldy@machineA:/process/snap/20180418/clients.tar.gz /tmp/) || (scp -o StrictHostKeyChecking=no goldy@machineB:/process/snap/20180418/clients.tar.gz /tmp/) || (scp -o StrictHostKeyChecking=no goldy@machineC:/process/snap/20180418/clients.tar.gz /tmp/)"

    - name: copy and untar latest clients.tar.gz file
      unarchive: src=/tmp/clients.tar.gz dest=/data/files/

    - name: Remove previous tarFile. Run this task only once at the end of this playbook
      file: path=/tmp/clients.tar.gz state=absent

    - name: sleep for few seconds
      pause: seconds=20
Update
I went through the linked questions and it looks like it can be done like this?
---
- name: copy files
  hosts: servers
  serial: 10
  tasks:
    - name: copy clients.tar.gz file. Run this task only once during starting
      shell: "(scp -o StrictHostKeyChecking=no goldy@machineA:/process/snap/20180418/clients.tar.gz /tmp/) || (scp -o StrictHostKeyChecking=no goldy@machineB:/process/snap/20180418/clients.tar.gz /tmp/) || (scp -o StrictHostKeyChecking=no goldy@machineC:/process/snap/20180418/clients.tar.gz /tmp/)"
      delegate_to: "{{ groups['servers'] | first }}"
      run_once: true

    - name: copy and untar latest clients.tar.gz file
      unarchive: src=/tmp/clients.tar.gz dest=/data/files/

    - name: Remove previous tarFile. Run this task only once at the end of this playbook
      file: path=/tmp/clients.tar.gz state=absent
      delegate_to: "{{ groups['servers'] | last }}"
      run_once: true

    - name: sleep for few seconds
      pause: seconds=20
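One caveat: when a play uses serial, run_once applies once per batch rather than once for the whole run, so with serial: 10 the delegated tasks above would still fire for every batch of ten hosts. A sketch that avoids this (under the assumption that the tarball is meant to live on the control node, as described above) splits the one-off steps into separate plays:

---
- name: fetch the tarball once on the control node
  hosts: localhost
  gather_facts: false
  tasks:
    - name: copy clients.tar.gz file
      shell: "(scp -o StrictHostKeyChecking=no goldy@machineA:/process/snap/20180418/clients.tar.gz /tmp/) || (scp -o StrictHostKeyChecking=no goldy@machineB:/process/snap/20180418/clients.tar.gz /tmp/)"

- name: roll the tarball out to all servers
  hosts: servers
  serial: 10
  tasks:
    - name: copy and untar latest clients.tar.gz file
      unarchive: src=/tmp/clients.tar.gz dest=/data/files/

- name: clean up once at the very end
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Remove previous tarFile
      file: path=/tmp/clients.tar.gz state=absent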
I am trying to read an environment variable from a target Linux host using an Ansible playbook. I tried all of the below tasks as per the documentation, but there is no result.
- name: Test1
  debug: msg="{{ ansible_env.BULK }}"
  delegate_to: "{{ target_host }}"

- name: Test2
  shell: echo $BULK
  delegate_to: "{{ target_host }}"
  register: foo

- debug: msg="{{ foo.stdout }}"

- name: Test3
  debug: msg="{{ lookup('env','BULK') }} is an environment variable"
  delegate_to: "{{ target_host }}"
The environment variable "BULK" is not set on the local host where I am executing the playbook, so I assume that is why it returns nothing. Instead of BULK, if I use "HOME", which is always available, it returns the result. If I SSH into the target_host I am able to run echo $BULK without any issue.
How do I obtain the environment variable from the remote host?
If I SSH into the target_host I am able to run echo $BULK without any issue.
Most likely because BULK is set in one of the rc-files that are sourced only in an interactive shell session on the target machine, while Ansible's gather_facts task runs in a non-interactive one.
How to obtain the Environment variable from the remote host?
Move the line setting the BULK variable to a place where it is sourced regardless of the session type (where exactly depends on the target OS and shell).
See for example: https://unix.stackexchange.com/a/170499/133107 for hints.
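For example, on many Linux distributions /etc/environment is read for every session type, so the move could even be done with Ansible itself. A sketch, assuming a pam_env-based system and a hypothetical value for BULK:

- name: make BULK visible to non-interactive sessions as well
  become: yes
  lineinfile:
    path: /etc/environment
    line: 'BULK=/data/bulk'   # hypothetical value, replace with the real one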
Source /etc/profile and then grep the env.
My solution is not perfect, but it often works. For example, when I want to check whether the remote host has environment variables for proxy servers, I do the following
as an ad-hoc Ansible command:
ansible remote_host -b -m shell -a '. /etc/profile && (env | grep -iP "proxy")'
Explanation:
I prefer the shell module; it does what I expect, the same as if I ran it in a shell.
. /etc/profile sources /etc/profile, and that file sources other files, e.g. under /etc/profile.d, so afterwards I have the fixed, machine-wide part of the environment.
env | grep -iP "proxy" then filters the expanded environment for the variables I am looking for.
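The same idea as a playbook task could look like this (a sketch; the grep pattern is just the proxy example from above, and failed_when keeps a "no match" grep result from failing the task):

- name: read proxy-related variables from the full environment
  become: yes
  shell: . /etc/profile && (env | grep -iP "proxy")
  register: proxy_env
  # grep exits with 1 when nothing matches; treat that as "no proxy vars", not an error
  failed_when: proxy_env.rc not in [0, 1]

- debug:
    msg: "{{ proxy_env.stdout_lines }}"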
Remote environment variables will be automatically gathered by ansible during the "Gathering Facts" task.
You can inspect them like this:
- name: inspect env vars
  debug:
    var: ansible_facts.env
In your case try this:
- name: Test4
  debug: msg="{{ ansible_facts.env.BULK }} is the value of an environment variable"