Ansible "nsupdate" module adding zone to record value

I'm trying to use the nsupdate module to update records but I'm having mixed success. While the records do get added, I'm getting the zone appended at the end of the value.
For example:
I want a cname called mycname.domain1.com pointed to shawarmas.domain2.com. After I run the playbook I end up with an entry that looks like this:
mycname.domain1.com. 5 IN CNAME shawarmas.domain2.com.domain1.com
Is there something wrong in my playbook that is causing this?
Playbook:
---
- hosts: myserver
  tasks:
    - debug:
        msg: "{{ value }}"
    - name: "Add record to escapia zone"
      nsupdate:
        key_name: "ddns"
        key_secret: "******"
        server: "dnsserver"
        record: "{{ record }}"
        type: "{{ type }}"
        value: "{{ value }}"
        ttl: 5
Run Command:
ansible-playbook -i inv -e "record=record-test.example.com. type=CNAME value=test.different.com" exampledns.yml -v
Ansible output:
changed: [myserver] => changed=true
  dns_rc: 0
  dns_rc_str: NOERROR
  record:
    record: record-test.example.com.
    ttl: 5
    type: CNAME
    value:
    - test.different.com
    zone: example.com.
DNS result:
;; ANSWER SECTION:
record-test.example.com. 5 IN CNAME test.different.com.example.com

Usually, you need to append a . to the end of the value to make it fully qualified. Without the trailing dot the value is treated as unqualified, and the DNS server appends the zone to it.
Try with:
ansible-playbook -i inv -e "record=record-test.example.com. type=CNAME value=test.different.com." exampledns.yml -v
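The same fix can also be expressed directly in the playbook. A minimal sketch using the question's example names (in recent Ansible the module is addressed as community.general.nsupdate):

```yaml
- name: "Add CNAME with a fully qualified value"
  community.general.nsupdate:
    key_name: "ddns"
    key_secret: "******"
    server: "dnsserver"
    record: "record-test.example.com."
    type: "CNAME"
    value: "test.different.com."   # trailing dot: fully qualified, zone is not appended
    ttl: 5
```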


Ansible - Find files via a list with their name and delete them

I'm currently running into an issue with deleting certain files from our Thumbor cache with Ansible. After a lot of snipping I receive a list with the file names, and I'm running the following Ansible task to find and delete them:
- shell: find . -name {{ item }} -exec rm "{}" \;
  args:
    chdir: "{{ thumbor_data_path }}"
  ignore_errors: true
  loop:
    - deletion_hash
  when: file_url is defined and deletion_hash | length > 0
The list is definitely filled with the correct names of files I know exist, and the task itself is marked as changed, but the files are not getting deleted. The names of the files are SHA-1 hashes, and they are two directories deep.
Is there something wrong with the shell script?
Example of the deletion_hash list:
"msg": [
"115b744b9f6b23bbad3b6181c858cb953136",
"f52f17b2cca937e5586751ff2e938979890b",
"1c39661a0925b3cdb3b524983aaf6cccd6ee",
"1afc79a9e0e3c07ff0e95e1af3b5cb7ae54c",
"424e9159fe652f47c8e01d0aa85a86fbefed",
"11e4994789f24537d6feea085d2bf39c355b",
"a1d2fe0e122d37555df4062d4c0a5d10b651",
"aef976fc897a87091be5a8d5a11698e19591",
"e79f3ee1e6ccb3caff288b0028e031d75d77",
"9448e5e49679c908263922debdffff68eecb",
"a3933be52277a341906751c3da2dfb07ccd8",
"bef3370862a7504f7857be396d5a3139f5c0",
"8cc0cbe847234af96c0463d49c258c85d50f",
"1e7bf6110dcf994d1270682939e14416fc6e",
"d21dae2c047895129e7c462f6ddc4e512a58",
"c107b29b3185171ec46b479352fab6c97ad2"
]
You can try using the file module instead. This assumes the thumbor_data_path variable does not end with a /; if it does, you need to adjust the path slightly.
- name: Remove file (delete file)
  ansible.builtin.file:
    path: "{{ thumbor_data_path }}/{{ item }}"
    state: absent
  loop: "{{ deletion_hash }}"
  when: file_url is defined and deletion_hash | length > 0
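Since the question notes the files sit two directories deep, a flat {{ thumbor_data_path }}/{{ item }} path may never match. A hedged alternative sketch, using the find module to locate the files recursively before deleting them:

```yaml
- name: Find cached files by name anywhere under the cache path
  ansible.builtin.find:
    paths: "{{ thumbor_data_path }}"
    patterns: "{{ deletion_hash }}"   # find accepts a list of patterns
    recurse: true
  register: found_files
  when: file_url is defined and deletion_hash | length > 0

- name: Delete every file that was found
  ansible.builtin.file:
    path: "{{ item.path }}"
    state: absent
  loop: "{{ found_files.files | default([]) }}"
```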

Ansible to print ps on new row of csv file

Newbie Ansible user here.
I'm trying to print the output of ps into a CSV file, but somehow it's printing in the next column rather than on a new row.
This is my playbook:
- name: Write running process into a csv file
  hosts: servers
  gather_facts: yes
  vars:
    output_path: "./reports/"
    filename: "process_{{ date }}.csv"
  tasks:
    - name: CSV - Generate output filename
      set_fact: date="{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}"
      run_once: true
    - name: CSV - Create file and set the header
      lineinfile:
        dest: "{{ output_path }}/{{ filename }}"
        line: PID,Started,CPU,Memory,User,Process
        create: yes
        state: present
      delegate_to: localhost
    - name: CSV - Get ps cpu
      ansible.builtin.shell:
        cmd: ps -e -o %p, -o lstart -o ,%C, -o %mem -o user, -o %c --no-header
      register: ps
    - name: CSV - Write into csv file
      lineinfile:
        insertafter: EOF
        dest: "{{ output_path }}/{{ filename }}"
        line: "{{ inventory_hostname }},{{ ps.stdout_lines }}"
      loop: "{{ ps.stdout_lines }}"
      delegate_to: localhost
    - name: CSV - Blank lines removal
      lineinfile:
        path: "./{{ output_path }}/{{ filename }}"
        state: absent
        regex: '^\s*$'
      delegate_to: localhost
current output is like
Example of desired output :
... but somehow its printing on next column rather than next row ... current output is like ...
This is because within your task "CSV - Write into csv file" you are printing all of stdout_lines on every iteration, instead of only the one line belonging to the current iteration step.
To print out line by line one could use an approach like
---
- hosts: localhost
  become: false
  gather_facts: false
  tasks:
    - name: CSV - Get ps cpu
      shell:
        cmd: ps -e -o %p, -o lstart -o ,%C, -o %mem -o user, -o %c --no-header
      register: ps
    - name: Show 'stdout_lines' one line per iteration for CSV
      debug:
        msg: "{{ inventory_hostname }}, {{ ps.stdout_lines[item | int] }}"
      loop: "{{ range(0, ps.stdout_lines | length) | list }}"
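Applied to the original "CSV - Write into csv file" task, the same idea looks like this (a sketch assuming the output_path and filename variables from the question): loop over ps.stdout_lines directly and write {{ item }}, the single line of the current iteration, instead of the whole list:

```yaml
- name: CSV - Write into csv file, one row per process
  lineinfile:
    insertafter: EOF
    dest: "{{ output_path }}/{{ filename }}"
    line: "{{ inventory_hostname }},{{ item }}"   # item is one line of ps output
  loop: "{{ ps.stdout_lines }}"
  delegate_to: localhost
```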

I am getting error like ERROR! unexpected parameter type in action: <class 'ansible.parsing.yaml.objects.AnsibleSequence'>, when running ansible

When I am trying to run this code in Ansible, I am getting an error like:
ERROR! unexpected parameter type in action: <class 'ansible.parsing.yaml.objects.AnsibleSequence'>
The error appears to be in '/home/c22377/icoms1.yml': line 15, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
handlers:
- name: starting one time job
^ here
I need to use handlers, can you correct me?
---
- hosts: catl
  name: "checking file ran or not"
  tasks:
    - shell: tail -1 test/test.log | awk '{print $8,$9,$10}'
      register: result
    - shell: date | awk '{print $1,$2,$3}'
      name: checking todays date
      register: time
    - debug:
        msg: "{{ result.stdout }}"
      when: result.stdout == time.stdout
      notify: starting one time job
  handlers:
    - name: starting one time job
      tasks:
        - shell: date
Change from:
  handlers:
    - name: starting one time job
      tasks:
        - shell: date
to:
  handlers:
    - name: starting one time job
      shell: date
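One caveat beyond the syntax fix: handlers only run when the notifying task reports changed, and a debug task never does, so the handler would still not fire. A sketch that forces the notification (changed_when here is an addition, not part of the original playbook):

```yaml
- debug:
    msg: "{{ result.stdout }}"
  when: result.stdout == time.stdout
  changed_when: true            # debug never reports "changed" on its own
  notify: starting one time job
```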

Is there a way to use if..else logic in conditional filters in ansible?

I want to hard-code two different values based on a variable's stdout in a single play. If a service is running I want to hard-code the value as good, else bad. How can I express this logic in Ansible?
I am able to hard-code one value based on the status result:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: load var from file
      include_vars:
        file: /tmp/var.json
        name: imported_var
    - name: Checking mysqld status
      shell: service mysqld status
      register: mysqld_stat
    - name: Checking httpd status
      shell: service httpd status
      register: httpd_stat
    - name: append more key/values
      set_fact:
        imported_var: "{{ imported_var | default([]) | combine({'mysqld_status': 'good'}) }}"
      when: mysqld_stat.rc == 0
    - name: append more key/values
      set_fact:
        imported_var: "{{ imported_var | default([]) | combine({'httpd_status': 'good'}) }}"
      when: httpd_stat.rc == 0
    - name: write var to file
      copy:
        content: "{{ imported_var | to_nice_json }}"
        dest: /tmp/final.json
I want to hard-code mysqld_status: good if mysqld_stat.rc == 0, or mysqld_status: bad if mysqld_stat.rc != 0. Is it possible to achieve this in a single play, i.e. in a single task?
There are many ways of approaching this problem. You could simply add a second set_fact that runs when mysqld_stat.rc != 0. In the following example, only one of the two set_fact tasks will run. (Note that for the rc != 0 branch ever to be reached, the service ... status task needs ignore_errors: true, since a non-zero exit code would otherwise fail the play.)
- name: append more key/values
  set_fact:
    imported_var: "{{ imported_var | default({}) | combine({'mysqld_status': 'good'}) }}"
  when: mysqld_stat.rc == 0
- name: append more key/values
  set_fact:
    imported_var: "{{ imported_var | default({}) | combine({'mysqld_status': 'bad'}) }}"
  when: mysqld_stat.rc != 0
You could instead use Ansible's ternary filter:
- name: append more key/values
  set_fact:
    imported_var: "{{ imported_var | default({}) | combine({'mysqld_status': (mysqld_stat.rc == 0) | ternary('good', 'bad')}) }}"
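An equivalent sketch using an inline Jinja if/else expression instead of the ternary filter; both evaluate to 'good' or 'bad':

```yaml
- name: append more key/values
  set_fact:
    # Jinja conditional expression: value-if-true if condition else value-if-false
    imported_var: "{{ imported_var | default({}) | combine({'mysqld_status': 'good' if mysqld_stat.rc == 0 else 'bad'}) }}"
```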

ansible get the interface name by providing an IP address

I have two interfaces on a Linux machine. One interface has an IP address assigned and the second is down, without any IP.
Now I would like to get the name of the interface matching an IP address that I provide as a value in Ansible.
I was trying something like this:
- name: interface name from provided IP
  set_fact:
    interface_name: "{{ item }}"
  with_items:
    - "{{ ansible_interfaces | map('replace', '-', '_') | list }}"
  when: hostvars[ansible_fqdn]['ansible_' ~ item]['ipv4']['address'] == PROVIDED_IP
It works fine when all interfaces have an IP address, but when one interface has no IP I get this error:
'dict object' has no attribute 'ipv4'
Is it possible to get the interface name without triggering errors?
You can do this using Jinja filters. Adding a selectattr('ipv4', 'defined') step skips interfaces without an address instead of raising an error on them:
- name: Get interface name from provided IP
  set_fact:
    interface_name: "{{ ansible_interfaces | map('regex_replace', '^', 'ansible_') | map('extract', vars) | selectattr('ipv4', 'defined') | selectattr('ipv4.address', 'match', '192\\.168\\.16\\.200') | map(attribute='device') | first }}"
{{ ansible_interfaces | map('regex_replace', '^', 'ansible_') | map('extract', vars) }} gets you a list of interfaces, where each element is a dictionary of facts about that interface. You can then filter and map the list to get what you need.
Or try this playbook; just set the IP you want to search for. (Note the 'ansible_' ~ item concatenation: nested moustaches like 'ansible_{{item}}' do not work inside when, and | default('') keeps interfaces without an IP from raising an error.)
- hosts: localhost
  gather_facts: true
  vars:
    desired_interface_name: ""
    target_interface_name: "192.168.16.200"
  tasks:
    - name: parse interfaces
      set_fact:
        desired_interface_name: "{{ item }}"
      when: hostvars[inventory_hostname]['ansible_' ~ item]['ipv4']['address'] | default('') == target_interface_name
      with_items:
        - "{{ ansible_interfaces }}"
    - name: print result
      debug:
        var: desired_interface_name
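To see why the ipv4 key can be missing, it can help to dump the raw per-interface facts first. A sketch (note that fact names replace - with _ in interface names):

```yaml
- name: show the fact dictionary for every interface
  debug:
    var: hostvars[inventory_hostname]['ansible_' ~ item]
  loop: "{{ ansible_interfaces | map('replace', '-', '_') | list }}"
```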
